Feb 16 02:04:41.178088 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 16 02:04:41.821414 master-0 kubenswrapper[4147]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 02:04:41.821414 master-0 kubenswrapper[4147]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 16 02:04:41.821414 master-0 kubenswrapper[4147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 02:04:41.821414 master-0 kubenswrapper[4147]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 02:04:41.821414 master-0 kubenswrapper[4147]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 16 02:04:41.821414 master-0 kubenswrapper[4147]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 02:04:41.823737 master-0 kubenswrapper[4147]: I0216 02:04:41.823550 4147 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 02:04:41.832181 master-0 kubenswrapper[4147]: W0216 02:04:41.832125 4147 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 02:04:41.832181 master-0 kubenswrapper[4147]: W0216 02:04:41.832159 4147 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 02:04:41.832181 master-0 kubenswrapper[4147]: W0216 02:04:41.832169 4147 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 02:04:41.832181 master-0 kubenswrapper[4147]: W0216 02:04:41.832180 4147 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 02:04:41.832181 master-0 kubenswrapper[4147]: W0216 02:04:41.832190 4147 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832200 4147 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832211 4147 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832221 4147 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832232 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832241 4147 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832252 4147 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832262 4147 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832273 4147 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832283 4147 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832291 4147 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832299 4147 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832307 4147 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832315 4147 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832323 4147 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832331 4147 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832339 4147 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832347 4147 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832355 4147 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832363 4147 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 02:04:41.832511 master-0 kubenswrapper[4147]: W0216 02:04:41.832370 4147 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832396 4147 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832411 4147 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832430 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832473 4147 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832483 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832493 4147 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832507 4147 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832520 4147 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832531 4147 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832541 4147 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832551 4147 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832558 4147 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832567 4147 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832574 4147 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832584 4147 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832592 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832602 4147 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832610 4147 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 02:04:41.833374 master-0 kubenswrapper[4147]: W0216 02:04:41.832618 4147 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832626 4147 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832634 4147 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832642 4147 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832651 4147 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832663 4147 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832673 4147 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832683 4147 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832694 4147 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832703 4147 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832711 4147 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832722 4147 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832732 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832741 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832751 4147 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832761 4147 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832772 4147 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832782 4147 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832802 4147 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 02:04:41.834252 master-0 kubenswrapper[4147]: W0216 02:04:41.832815 4147 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832827 4147 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832837 4147 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832846 4147 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832854 4147 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832861 4147 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832869 4147 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832877 4147 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832885 4147 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: W0216 02:04:41.832896 4147 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.833987 4147 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834015 4147 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834043 4147 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834057 4147 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834077 4147 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834087 4147 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834103 4147 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834115 4147 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834124 4147 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834133 4147 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834143 4147 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 02:04:41.835104 master-0 kubenswrapper[4147]: I0216 02:04:41.834153 4147 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834163 4147 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834172 4147 flags.go:64] FLAG: --cgroup-root=""
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834181 4147 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834191 4147 flags.go:64] FLAG: --client-ca-file=""
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834199 4147 flags.go:64] FLAG: --cloud-config=""
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834209 4147 flags.go:64] FLAG: --cloud-provider=""
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834218 4147 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834240 4147 flags.go:64] FLAG: --cluster-domain=""
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834249 4147 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834258 4147 flags.go:64] FLAG: --config-dir=""
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834267 4147 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834277 4147 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834289 4147 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834299 4147 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834308 4147 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834318 4147 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834327 4147 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834336 4147 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834345 4147 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834355 4147 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834364 4147 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834375 4147 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834384 4147 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834395 4147 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 02:04:41.836164 master-0 kubenswrapper[4147]: I0216 02:04:41.834406 4147 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834418 4147 flags.go:64] FLAG: --enable-server="true"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834430 4147 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834480 4147 flags.go:64] FLAG: --event-burst="100"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834492 4147 flags.go:64] FLAG: --event-qps="50"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834501 4147 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834511 4147 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834520 4147 flags.go:64] FLAG: --eviction-hard=""
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834533 4147 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834542 4147 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834551 4147 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834561 4147 flags.go:64] FLAG: --eviction-soft=""
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834569 4147 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834579 4147 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834588 4147 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834598 4147 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834607 4147 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834616 4147 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834625 4147 flags.go:64] FLAG: --feature-gates=""
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834635 4147 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834645 4147 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834654 4147 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834663 4147 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834674 4147 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834684 4147 flags.go:64] FLAG: --help="false"
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834693 4147 flags.go:64] FLAG: --hostname-override=""
Feb 16 02:04:41.837326 master-0 kubenswrapper[4147]: I0216 02:04:41.834702 4147 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834711 4147 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834721 4147 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834729 4147 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834738 4147 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834747 4147 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834756 4147 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834767 4147 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834779 4147 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834790 4147 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834802 4147 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834813 4147 flags.go:64] FLAG: --kube-reserved=""
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834823 4147 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834832 4147 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834841 4147 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834850 4147 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834861 4147 flags.go:64] FLAG: --lock-file=""
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834870 4147 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834880 4147 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834889 4147 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834902 4147 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834911 4147 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834921 4147 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834930 4147 flags.go:64] FLAG: --logging-format="text"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834938 4147 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 02:04:41.838710 master-0 kubenswrapper[4147]: I0216 02:04:41.834948 4147 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.834957 4147 flags.go:64] FLAG: --manifest-url=""
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.834966 4147 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.834978 4147 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.834987 4147 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.834998 4147 flags.go:64] FLAG: --max-pods="110"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835008 4147 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835018 4147 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835026 4147 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835036 4147 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835047 4147 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835058 4147 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835070 4147 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835099 4147 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835111 4147 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835122 4147 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835131 4147 flags.go:64] FLAG: --pod-cidr=""
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835140 4147 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835154 4147 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835163 4147 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835173 4147 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835182 4147 flags.go:64] FLAG: --port="10250"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835191 4147 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835200 4147 flags.go:64] FLAG: --provider-id=""
Feb 16 02:04:41.839825 master-0 kubenswrapper[4147]: I0216 02:04:41.835209 4147 flags.go:64] FLAG: --qos-reserved=""
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835218 4147 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835227 4147 flags.go:64] FLAG: --register-node="true"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835239 4147 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835248 4147 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835264 4147 flags.go:64] FLAG: --registry-burst="10"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835273 4147 flags.go:64] FLAG: --registry-qps="5"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835282 4147 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835290 4147 flags.go:64] FLAG: --reserved-memory=""
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835302 4147 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835311 4147 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835321 4147 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835329 4147 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835338 4147 flags.go:64] FLAG: --runonce="false"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835347 4147 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835356 4147 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835371 4147 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835413 4147 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835425 4147 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835465 4147 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835475 4147 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835489 4147 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835498 4147 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835507 4147 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835518 4147 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 02:04:41.840882 master-0 kubenswrapper[4147]: I0216 02:04:41.835529 4147 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835541 4147 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835553 4147 flags.go:64] FLAG: --system-cgroups=""
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835564 4147 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835582 4147 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835593 4147 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835604 4147 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835618 4147 flags.go:64] FLAG: --tls-min-version=""
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835627 4147 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835635 4147 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835644 4147 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835654 4147 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835662 4147 flags.go:64] FLAG: --v="2"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835675 4147 flags.go:64] FLAG: --version="false"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835688 4147 flags.go:64] FLAG: --vmodule=""
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835699 4147 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: I0216 02:04:41.835708 4147 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: W0216 02:04:41.835941 4147 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: W0216 02:04:41.835953 4147 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: W0216 02:04:41.835963 4147 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: W0216 02:04:41.835971 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: W0216 02:04:41.835979 4147 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: W0216 02:04:41.835987 4147 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: W0216 02:04:41.836000 4147 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 02:04:41.841982 master-0 kubenswrapper[4147]: W0216 02:04:41.836008 4147 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836016 4147 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836024 4147 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836032 4147 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836042 4147 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836050 4147 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836058 4147 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836066 4147 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836074 4147 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836081 4147 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836089 4147 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836097 4147 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836105 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836115 4147 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836126 4147 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836136 4147 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836145 4147 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836158 4147 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836170 4147 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 02:04:41.843655 master-0 kubenswrapper[4147]: W0216 02:04:41.836180 4147 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836192 4147 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836202 4147 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836211 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836221 4147 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836231 4147 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836240 4147 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836252 4147 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 
02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836262 4147 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836272 4147 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836281 4147 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836291 4147 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836304 4147 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836313 4147 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836324 4147 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836337 4147 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836347 4147 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836360 4147 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836370 4147 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836380 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 02:04:41.844518 master-0 kubenswrapper[4147]: W0216 02:04:41.836391 4147 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836403 4147 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836413 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836422 4147 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836431 4147 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836477 4147 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836488 4147 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836497 4147 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 02:04:41.845622 
master-0 kubenswrapper[4147]: W0216 02:04:41.836506 4147 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836516 4147 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836526 4147 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836536 4147 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836545 4147 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836555 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836562 4147 feature_gate.go:330] unrecognized feature gate: Example Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836570 4147 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836579 4147 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836587 4147 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836595 4147 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836603 4147 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 02:04:41.845622 master-0 kubenswrapper[4147]: W0216 02:04:41.836612 4147 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 02:04:41.846685 master-0 kubenswrapper[4147]: W0216 02:04:41.836621 4147 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 02:04:41.846685 master-0 kubenswrapper[4147]: W0216 02:04:41.836630 4147 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 02:04:41.846685 master-0 kubenswrapper[4147]: W0216 02:04:41.836641 4147 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 02:04:41.846685 master-0 kubenswrapper[4147]: W0216 02:04:41.836656 4147 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 02:04:41.846685 master-0 kubenswrapper[4147]: W0216 02:04:41.836666 4147 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:04:41.846685 master-0 kubenswrapper[4147]: I0216 02:04:41.837876 4147 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 02:04:41.849627 master-0 kubenswrapper[4147]: I0216 02:04:41.849540 4147 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 16 02:04:41.849627 master-0 kubenswrapper[4147]: I0216 02:04:41.849613 4147 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 02:04:41.849804 master-0 kubenswrapper[4147]: W0216 02:04:41.849762 4147 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 02:04:41.849804 master-0 kubenswrapper[4147]: W0216 02:04:41.849785 4147 feature_gate.go:330] unrecognized feature 
gate: SetEIPForNLBIngressController Feb 16 02:04:41.849804 master-0 kubenswrapper[4147]: W0216 02:04:41.849797 4147 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849815 4147 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849826 4147 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849835 4147 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849843 4147 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849851 4147 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849859 4147 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849867 4147 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849875 4147 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849883 4147 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849892 4147 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849900 4147 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849909 4147 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849917 4147 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849925 4147 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849933 4147 feature_gate.go:330] unrecognized feature gate: Example Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849941 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849949 4147 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849957 4147 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849965 4147 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 02:04:41.849982 master-0 kubenswrapper[4147]: W0216 02:04:41.849972 4147 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.849982 4147 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.849993 4147 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850004 4147 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850014 4147 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850026 4147 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850036 4147 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850046 4147 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850056 4147 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850064 4147 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850073 4147 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850082 4147 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850095 4147 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850103 4147 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850112 4147 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850121 4147 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850129 4147 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850137 4147 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850146 4147 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 02:04:41.850924 master-0 kubenswrapper[4147]: W0216 02:04:41.850153 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850161 4147 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850169 4147 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850178 4147 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850186 4147 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850194 4147 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: 
W0216 02:04:41.850202 4147 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850210 4147 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850218 4147 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850226 4147 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850234 4147 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850242 4147 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850251 4147 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850259 4147 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850268 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850283 4147 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850293 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850303 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850312 4147 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 02:04:41.851953 master-0 kubenswrapper[4147]: W0216 02:04:41.850321 4147 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850330 4147 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850338 4147 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850346 4147 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850354 4147 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850362 4147 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850370 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850378 4147 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850385 4147 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850394 4147 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 02:04:41.852899 master-0 
kubenswrapper[4147]: W0216 02:04:41.850417 4147 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850426 4147 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: I0216 02:04:41.850465 4147 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850726 4147 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850743 4147 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:04:41.852899 master-0 kubenswrapper[4147]: W0216 02:04:41.850752 4147 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850760 4147 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850767 4147 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850776 4147 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850784 4147 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: 
W0216 02:04:41.850792 4147 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850801 4147 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850808 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850816 4147 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850824 4147 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850832 4147 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850840 4147 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850848 4147 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850856 4147 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850864 4147 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850873 4147 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850881 4147 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850889 4147 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850897 4147 feature_gate.go:330] unrecognized feature 
gate: InsightsOnDemandDataGather Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850905 4147 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 02:04:41.853707 master-0 kubenswrapper[4147]: W0216 02:04:41.850912 4147 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850921 4147 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850929 4147 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850938 4147 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850946 4147 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850958 4147 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850970 4147 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850980 4147 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850988 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.850997 4147 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851006 4147 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851017 4147 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851028 4147 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851036 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851045 4147 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851054 4147 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851063 4147 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851072 4147 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851080 4147 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 02:04:41.854626 master-0 kubenswrapper[4147]: W0216 02:04:41.851089 4147 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851096 4147 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851104 4147 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851112 4147 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851121 4147 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851129 4147 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851138 4147 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851146 4147 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851155 4147 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851163 4147 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851171 4147 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851179 4147 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851189 4147 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851199 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851207 4147 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851215 4147 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851224 4147 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851233 4147 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851241 4147 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 02:04:41.855725 master-0 kubenswrapper[4147]: W0216 02:04:41.851250 4147 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851260 4147 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851269 4147 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851278 4147 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851285 4147 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851294 4147 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851302 4147 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851310 4147 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851318 4147 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851327 4147 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851336 4147 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: W0216 02:04:41.851343 4147 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: I0216 02:04:41.851357 4147 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: I0216 02:04:41.851766 4147 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 02:04:41.856643 master-0 kubenswrapper[4147]: I0216 02:04:41.855292 4147 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 16 02:04:41.857541 master-0 kubenswrapper[4147]: I0216 02:04:41.857473 4147 server.go:997] "Starting client certificate rotation"
Feb 16 02:04:41.857541 master-0 kubenswrapper[4147]: I0216 02:04:41.857509 4147 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 02:04:41.858011 master-0 kubenswrapper[4147]: I0216 02:04:41.857936 4147 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 02:04:41.888407 master-0 kubenswrapper[4147]: I0216 02:04:41.888309 4147 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 02:04:41.892200 master-0 kubenswrapper[4147]: I0216 02:04:41.892142 4147 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 02:04:41.893804 master-0 kubenswrapper[4147]: E0216 02:04:41.893731 4147 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 02:04:41.912693 master-0 kubenswrapper[4147]: I0216 02:04:41.912590 4147 log.go:25] "Validated CRI v1 runtime API"
Feb 16 02:04:41.918467 master-0 kubenswrapper[4147]: I0216 02:04:41.918393 4147 log.go:25] "Validated CRI v1 image API"
Feb 16 02:04:41.920906 master-0 kubenswrapper[4147]: I0216 02:04:41.920860 4147 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 02:04:41.926005 master-0 kubenswrapper[4147]: I0216 02:04:41.925933 4147 fs.go:135] Filesystem UUIDs: map[62dc72f5-7748-49f9-b4d1-75449f1d8b55:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 16 02:04:41.926089 master-0 kubenswrapper[4147]: I0216 02:04:41.925989 4147 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Feb 16 02:04:41.959122 master-0 kubenswrapper[4147]: I0216 02:04:41.958595 4147 manager.go:217] Machine: {Timestamp:2026-02-16 02:04:41.956250964 +0000 UTC m=+0.591986160 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1c19e24b661c4676981e885f5d8565ba SystemUUID:1c19e24b-661c-4676-981e-885f5d8565ba BootID:6af96a74-4ecc-4294-8d2f-0e5321b23e8e Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:61:d7:58 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:30:df:9f Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:6e:22:cb:a5:72:48 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 02:04:41.959122 master-0 kubenswrapper[4147]: I0216 02:04:41.959081 4147 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 16 02:04:41.959367 master-0 kubenswrapper[4147]: I0216 02:04:41.959302 4147 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 02:04:41.961013 master-0 kubenswrapper[4147]: I0216 02:04:41.960967 4147 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 02:04:41.961346 master-0 kubenswrapper[4147]: I0216 02:04:41.961282 4147 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 02:04:41.961758 master-0 kubenswrapper[4147]: I0216 02:04:41.961339 4147 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 02:04:41.962633 master-0 kubenswrapper[4147]: I0216 02:04:41.962584 4147 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 02:04:41.962633 master-0 kubenswrapper[4147]: I0216 02:04:41.962621 4147 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 02:04:41.963485 master-0 kubenswrapper[4147]: I0216 02:04:41.963427 4147 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 02:04:41.963582 master-0 kubenswrapper[4147]: I0216 02:04:41.963516 4147 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 02:04:41.963913 master-0 kubenswrapper[4147]: I0216 02:04:41.963867 4147 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 02:04:41.964086 master-0 kubenswrapper[4147]: I0216 02:04:41.964045 4147 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 02:04:41.970274 master-0 kubenswrapper[4147]: I0216 02:04:41.970230 4147 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 02:04:41.970514 master-0 kubenswrapper[4147]: I0216 02:04:41.970476 4147 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 02:04:41.970581 master-0 kubenswrapper[4147]: I0216 02:04:41.970545 4147 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 02:04:41.970581 master-0 kubenswrapper[4147]: I0216 02:04:41.970571 4147 kubelet.go:324] "Adding apiserver pod source"
Feb 16 02:04:41.970696 master-0 kubenswrapper[4147]: I0216 02:04:41.970589 4147 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 02:04:41.976104 master-0 kubenswrapper[4147]: I0216 02:04:41.976039 4147 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1"
Feb 16 02:04:41.982153 master-0 kubenswrapper[4147]: I0216 02:04:41.981656 4147 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 02:04:41.982396 master-0 kubenswrapper[4147]: I0216 02:04:41.982197 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 02:04:41.982396 master-0 kubenswrapper[4147]: I0216 02:04:41.982249 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 02:04:41.982396 master-0 kubenswrapper[4147]: I0216 02:04:41.982272 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 02:04:41.982396 master-0 kubenswrapper[4147]: I0216 02:04:41.982306 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 02:04:41.982396 master-0 kubenswrapper[4147]: I0216 02:04:41.982326 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 02:04:41.982396 master-0 kubenswrapper[4147]: I0216 02:04:41.982347 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 02:04:41.982396 master-0 kubenswrapper[4147]: I0216 02:04:41.982369 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 02:04:41.982396 master-0 kubenswrapper[4147]: I0216 02:04:41.982388 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 02:04:41.982661 master-0 kubenswrapper[4147]: I0216 02:04:41.982477 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 02:04:41.982661 master-0 kubenswrapper[4147]: I0216 02:04:41.982505 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 02:04:41.982661 master-0 kubenswrapper[4147]: I0216 02:04:41.982537 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 02:04:41.982661 master-0 kubenswrapper[4147]: I0216 02:04:41.982582 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 02:04:41.982661 master-0 kubenswrapper[4147]: W0216 02:04:41.982531 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 02:04:41.982810 master-0 kubenswrapper[4147]: W0216 02:04:41.982646 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 02:04:41.982810 master-0 kubenswrapper[4147]: E0216 02:04:41.982733 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 02:04:41.982878 master-0 kubenswrapper[4147]: E0216 02:04:41.982794 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 02:04:41.985178 master-0 kubenswrapper[4147]: I0216 02:04:41.985148 4147 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 02:04:41.988671 master-0 kubenswrapper[4147]: I0216 02:04:41.988633 4147 server.go:1280] "Started kubelet"
Feb 16 02:04:41.989610 master-0 kubenswrapper[4147]: I0216 02:04:41.989544 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 02:04:41.990673 master-0 kubenswrapper[4147]: I0216 02:04:41.990605 4147 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 02:04:41.990995 master-0 kubenswrapper[4147]: I0216 02:04:41.990612 4147 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 02:04:41.991083 master-0 kubenswrapper[4147]: I0216 02:04:41.991016 4147 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 16 02:04:41.990996 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 16 02:04:41.991800 master-0 kubenswrapper[4147]: I0216 02:04:41.991742 4147 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 02:04:41.992176 master-0 kubenswrapper[4147]: I0216 02:04:41.992153 4147 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 02:04:41.992176 master-0 kubenswrapper[4147]: I0216 02:04:41.992178 4147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 02:04:41.993570 master-0 kubenswrapper[4147]: I0216 02:04:41.993536 4147 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 02:04:41.993570 master-0 kubenswrapper[4147]: I0216 02:04:41.993551 4147 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 02:04:41.993755 master-0 kubenswrapper[4147]: E0216 02:04:41.993575 4147 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 02:04:41.993755 master-0 kubenswrapper[4147]: I0216 02:04:41.993645 4147 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 16 02:04:41.994598 master-0 kubenswrapper[4147]: I0216 02:04:41.994565 4147 reconstruct.go:97] "Volume reconstruction finished"
Feb 16 02:04:41.994598 master-0 kubenswrapper[4147]: I0216 02:04:41.994581 4147 reconciler.go:26] "Reconciler: start to sync state"
Feb 16 02:04:41.995139 master-0 kubenswrapper[4147]: W0216 02:04:41.995027 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 02:04:41.995261 master-0 kubenswrapper[4147]: E0216 02:04:41.995158 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 02:04:41.995688 master-0 kubenswrapper[4147]: E0216 02:04:41.993925 4147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189497d213718835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:41.988581429 +0000 UTC m=+0.624316635,LastTimestamp:2026-02-16 02:04:41.988581429 +0000 UTC m=+0.624316635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:04:41.995989 master-0 kubenswrapper[4147]: E0216 02:04:41.995904 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 16 02:04:41.996392 master-0 kubenswrapper[4147]: I0216 02:04:41.996354 4147 factory.go:55] Registering systemd factory
Feb 16 02:04:41.996392 master-0 kubenswrapper[4147]: I0216 02:04:41.996389 4147 factory.go:221] Registration of the systemd container factory successfully
Feb 16 02:04:41.996850 master-0 kubenswrapper[4147]: I0216 02:04:41.996810 4147 factory.go:153] Registering CRI-O factory
Feb 16 02:04:41.996850 master-0 kubenswrapper[4147]: I0216 02:04:41.996842 4147 factory.go:221] Registration of the crio container factory successfully
Feb 16 02:04:41.996967 master-0 kubenswrapper[4147]: I0216 02:04:41.996946 4147 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 02:04:41.997024 master-0 kubenswrapper[4147]: I0216 02:04:41.996976 4147 factory.go:103] Registering Raw factory
Feb 16 02:04:41.997024 master-0 kubenswrapper[4147]: I0216 02:04:41.996999 4147 manager.go:1196] Started watching for new ooms in manager
Feb 16 02:04:41.997290 master-0 kubenswrapper[4147]: I0216 02:04:41.997262 4147 server.go:449] "Adding debug handlers to kubelet server"
Feb 16 02:04:41.999014 master-0 kubenswrapper[4147]: I0216 02:04:41.998979 4147 manager.go:319] Starting recovery of all containers
Feb 16 02:04:42.010810 master-0 kubenswrapper[4147]: E0216 02:04:42.010376 4147 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 16 02:04:42.027910 master-0 kubenswrapper[4147]: I0216 02:04:42.027865 4147 manager.go:324] Recovery completed
Feb 16 02:04:42.036962 master-0 kubenswrapper[4147]: I0216 02:04:42.036924 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:04:42.038195 master-0 kubenswrapper[4147]: I0216 02:04:42.038153 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:04:42.038195 master-0 kubenswrapper[4147]: I0216 02:04:42.038187 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:04:42.038195 master-0 kubenswrapper[4147]: I0216 02:04:42.038195 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:04:42.039083 master-0 kubenswrapper[4147]: I0216 02:04:42.039048 4147 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 16 02:04:42.039083 master-0 kubenswrapper[4147]: I0216 02:04:42.039062 4147 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 16 02:04:42.039083 master-0 kubenswrapper[4147]: I0216 02:04:42.039079 4147 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 02:04:42.043346 master-0 kubenswrapper[4147]: I0216 02:04:42.043311 4147 policy_none.go:49] "None policy: Start"
Feb 16 02:04:42.044156 master-0 kubenswrapper[4147]: I0216 02:04:42.044102 4147 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 16 02:04:42.044233 master-0 kubenswrapper[4147]: I0216 02:04:42.044191 4147 state_mem.go:35] "Initializing new in-memory state store"
Feb 16 02:04:42.093878 master-0 kubenswrapper[4147]: E0216 02:04:42.093810 4147 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 02:04:42.120567 master-0 kubenswrapper[4147]: I0216 02:04:42.120517 4147 manager.go:334] "Starting Device Plugin manager"
Feb 16 02:04:42.120785 master-0 kubenswrapper[4147]: I0216 02:04:42.120731 4147 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 16 02:04:42.120785 master-0 kubenswrapper[4147]: I0216 02:04:42.120768 4147 server.go:79] "Starting device plugin registration server"
Feb 16 02:04:42.121275 master-0 kubenswrapper[4147]: I0216 02:04:42.121222 4147 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 16 02:04:42.121382 master-0 kubenswrapper[4147]: I0216 02:04:42.121256 4147 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 16 02:04:42.121539 master-0 kubenswrapper[4147]: I0216 02:04:42.121493 4147 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 16 02:04:42.121754 master-0 kubenswrapper[4147]: I0216 02:04:42.121705 4147 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 16 02:04:42.121754 master-0 kubenswrapper[4147]: I0216 02:04:42.121738 4147 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 16 02:04:42.124141 master-0 kubenswrapper[4147]: E0216 02:04:42.124086 4147 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 16 02:04:42.182947 master-0 kubenswrapper[4147]: I0216 02:04:42.182854 4147 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 16 02:04:42.188848 master-0 kubenswrapper[4147]: I0216 02:04:42.186357 4147 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 16 02:04:42.188848 master-0 kubenswrapper[4147]: I0216 02:04:42.186432 4147 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 16 02:04:42.188848 master-0 kubenswrapper[4147]: I0216 02:04:42.186491 4147 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 16 02:04:42.188848 master-0 kubenswrapper[4147]: E0216 02:04:42.186567 4147 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 16 02:04:42.188848 master-0 kubenswrapper[4147]: W0216 02:04:42.187834 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 02:04:42.188848 master-0 kubenswrapper[4147]: E0216 02:04:42.187961 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 02:04:42.197630 master-0 kubenswrapper[4147]: E0216 02:04:42.197554 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 16 02:04:42.221685 master-0 kubenswrapper[4147]: I0216 02:04:42.221613 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:04:42.223545 master-0 kubenswrapper[4147]: I0216 02:04:42.223501 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:04:42.223664 master-0 kubenswrapper[4147]: I0216 02:04:42.223558 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:04:42.223664 master-0 kubenswrapper[4147]: I0216 02:04:42.223582 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:04:42.223664 master-0 kubenswrapper[4147]: I0216 02:04:42.223627 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 02:04:42.224772 master-0 kubenswrapper[4147]: E0216 02:04:42.224699 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 02:04:42.286772 master-0 kubenswrapper[4147]: I0216 02:04:42.286703 4147 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 16 02:04:42.286920 master-0 kubenswrapper[4147]: I0216 02:04:42.286800 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:04:42.288065 master-0 kubenswrapper[4147]: I0216 02:04:42.287970 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:04:42.288065 master-0 kubenswrapper[4147]: I0216 02:04:42.288020 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:04:42.288065 master-0 kubenswrapper[4147]: I0216 02:04:42.288036 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:04:42.288425 master-0 kubenswrapper[4147]: I0216 02:04:42.288154 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:04:42.288425 master-0 kubenswrapper[4147]: I0216 02:04:42.288405 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:04:42.288598 master-0 kubenswrapper[4147]: I0216 02:04:42.288495 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:04:42.289385 master-0 kubenswrapper[4147]: I0216 02:04:42.289334 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:04:42.289497 master-0 kubenswrapper[4147]: I0216 02:04:42.289399 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:04:42.289497 master-0 kubenswrapper[4147]: I0216 02:04:42.289423 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:04:42.289665 master-0 kubenswrapper[4147]: I0216 02:04:42.289518 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:04:42.289665 master-0 kubenswrapper[4147]: I0216 02:04:42.289555 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:04:42.289665 master-0 kubenswrapper[4147]: I0216 02:04:42.289595 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:04:42.289665 master-0 kubenswrapper[4147]: I0216 02:04:42.289628 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:04:42.289876 master-0 kubenswrapper[4147]: I0216 02:04:42.289791 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 02:04:42.289876 master-0 kubenswrapper[4147]: I0216 02:04:42.289840 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:04:42.294810 master-0 kubenswrapper[4147]: I0216 02:04:42.294750 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:04:42.294941 master-0 kubenswrapper[4147]: I0216 02:04:42.294858 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:04:42.294941 master-0 kubenswrapper[4147]: I0216 02:04:42.294932 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:04:42.295350 master-0 kubenswrapper[4147]: I0216 02:04:42.295304 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:04:42.295481 master-0 kubenswrapper[4147]: I0216 02:04:42.295387 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:04:42.295481 master-0 kubenswrapper[4147]: I0216 02:04:42.295414 4147 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:42.296625 master-0 kubenswrapper[4147]: I0216 02:04:42.296570 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:04:42.296625 master-0 kubenswrapper[4147]: I0216 02:04:42.296616 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:42.296817 master-0 kubenswrapper[4147]: I0216 02:04:42.296658 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.296817 master-0 kubenswrapper[4147]: I0216 02:04:42.296707 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.296817 master-0 kubenswrapper[4147]: I0216 02:04:42.296755 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.296817 master-0 kubenswrapper[4147]: I0216 02:04:42.296787 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:04:42.296817 master-0 kubenswrapper[4147]: I0216 02:04:42.296804 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:04:42.297326 master-0 kubenswrapper[4147]: I0216 02:04:42.296819 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.297326 master-0 kubenswrapper[4147]: I0216 02:04:42.296857 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:42.297326 master-0 kubenswrapper[4147]: I0216 02:04:42.296909 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.297326 master-0 kubenswrapper[4147]: I0216 02:04:42.296977 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod 
\"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:04:42.297326 master-0 kubenswrapper[4147]: I0216 02:04:42.297061 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.297704 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.297748 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.297764 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.297906 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.297946 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.297948 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.298061 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.298104 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:04:42.298202 master-0 kubenswrapper[4147]: I0216 02:04:42.298154 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:42.300037 master-0 kubenswrapper[4147]: I0216 02:04:42.298974 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:42.300037 master-0 kubenswrapper[4147]: I0216 02:04:42.298988 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:42.300037 master-0 kubenswrapper[4147]: I0216 02:04:42.299046 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:42.300037 master-0 kubenswrapper[4147]: I0216 02:04:42.299058 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:42.300037 master-0 kubenswrapper[4147]: I0216 02:04:42.299085 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:42.300037 master-0 kubenswrapper[4147]: I0216 02:04:42.299064 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:42.300037 master-0 kubenswrapper[4147]: I0216 02:04:42.299413 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.300037 master-0 kubenswrapper[4147]: I0216 02:04:42.299494 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:42.301098 master-0 kubenswrapper[4147]: I0216 02:04:42.300889 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:42.301098 master-0 kubenswrapper[4147]: I0216 02:04:42.300926 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:42.301098 master-0 kubenswrapper[4147]: I0216 02:04:42.300941 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:42.397747 master-0 kubenswrapper[4147]: I0216 02:04:42.397676 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.397882 master-0 kubenswrapper[4147]: I0216 02:04:42.397752 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.397882 master-0 kubenswrapper[4147]: I0216 02:04:42.397806 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: 
\"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:04:42.397882 master-0 kubenswrapper[4147]: I0216 02:04:42.397857 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:04:42.398074 master-0 kubenswrapper[4147]: I0216 02:04:42.397897 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.398074 master-0 kubenswrapper[4147]: I0216 02:04:42.397906 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.398074 master-0 kubenswrapper[4147]: I0216 02:04:42.397907 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:04:42.398074 master-0 kubenswrapper[4147]: I0216 02:04:42.398014 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod 
\"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:04:42.398074 master-0 kubenswrapper[4147]: I0216 02:04:42.398034 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:04:42.398351 master-0 kubenswrapper[4147]: I0216 02:04:42.398022 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.398351 master-0 kubenswrapper[4147]: I0216 02:04:42.398160 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.398351 master-0 kubenswrapper[4147]: I0216 02:04:42.398229 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.398351 master-0 kubenswrapper[4147]: I0216 02:04:42.398292 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" 
(UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:04:42.398351 master-0 kubenswrapper[4147]: I0216 02:04:42.398327 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:04:42.398661 master-0 kubenswrapper[4147]: I0216 02:04:42.398385 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.398661 master-0 kubenswrapper[4147]: I0216 02:04:42.398390 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.398661 master-0 kubenswrapper[4147]: I0216 02:04:42.398420 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.398661 master-0 kubenswrapper[4147]: I0216 02:04:42.398503 4147 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:04:42.398661 master-0 kubenswrapper[4147]: I0216 02:04:42.398531 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.398661 master-0 kubenswrapper[4147]: I0216 02:04:42.398607 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.398987 master-0 kubenswrapper[4147]: I0216 02:04:42.398668 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.398987 master-0 kubenswrapper[4147]: I0216 02:04:42.398683 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.398987 master-0 
kubenswrapper[4147]: I0216 02:04:42.398701 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:04:42.398987 master-0 kubenswrapper[4147]: I0216 02:04:42.398743 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.398987 master-0 kubenswrapper[4147]: I0216 02:04:42.398767 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.398987 master-0 kubenswrapper[4147]: I0216 02:04:42.398791 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:04:42.425255 master-0 kubenswrapper[4147]: I0216 02:04:42.425187 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:42.426852 master-0 kubenswrapper[4147]: I0216 02:04:42.426788 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:42.426852 master-0 kubenswrapper[4147]: I0216 02:04:42.426842 4147 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:42.427058 master-0 kubenswrapper[4147]: I0216 02:04:42.426859 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:42.427058 master-0 kubenswrapper[4147]: I0216 02:04:42.426929 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:04:42.428201 master-0 kubenswrapper[4147]: E0216 02:04:42.428135 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 02:04:42.499097 master-0 kubenswrapper[4147]: I0216 02:04:42.498969 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:04:42.499097 master-0 kubenswrapper[4147]: I0216 02:04:42.499048 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499400 master-0 kubenswrapper[4147]: I0216 02:04:42.499179 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499400 master-0 kubenswrapper[4147]: I0216 02:04:42.499241 4147 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499400 master-0 kubenswrapper[4147]: I0216 02:04:42.499275 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499400 master-0 kubenswrapper[4147]: I0216 02:04:42.499311 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499400 master-0 kubenswrapper[4147]: I0216 02:04:42.499375 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 02:04:42.499398 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 
02:04:42.499409 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 02:04:42.499485 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 02:04:42.499499 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 02:04:42.499518 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 02:04:42.499496 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 02:04:42.499540 4147 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 02:04:42.499595 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.499725 master-0 kubenswrapper[4147]: I0216 02:04:42.499598 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.599730 master-0 kubenswrapper[4147]: E0216 02:04:42.599544 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 16 02:04:42.639537 master-0 kubenswrapper[4147]: I0216 02:04:42.639418 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:42.652406 master-0 kubenswrapper[4147]: I0216 02:04:42.652353 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:04:42.676969 master-0 kubenswrapper[4147]: I0216 02:04:42.676882 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:04:42.708622 master-0 kubenswrapper[4147]: I0216 02:04:42.708575 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:04:42.721021 master-0 kubenswrapper[4147]: I0216 02:04:42.720914 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:42.828957 master-0 kubenswrapper[4147]: I0216 02:04:42.828865 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:42.830269 master-0 kubenswrapper[4147]: I0216 02:04:42.830213 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:42.830269 master-0 kubenswrapper[4147]: I0216 02:04:42.830269 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:42.830502 master-0 kubenswrapper[4147]: I0216 02:04:42.830286 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:42.830502 master-0 kubenswrapper[4147]: I0216 02:04:42.830345 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:04:42.831710 master-0 kubenswrapper[4147]: E0216 02:04:42.831650 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 02:04:42.896115 master-0 kubenswrapper[4147]: W0216 02:04:42.895889 4147 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:42.896115 master-0 kubenswrapper[4147]: E0216 02:04:42.896029 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:42.992461 master-0 kubenswrapper[4147]: I0216 02:04:42.992241 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:43.085616 master-0 kubenswrapper[4147]: W0216 02:04:43.085472 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:43.085616 master-0 kubenswrapper[4147]: E0216 02:04:43.085608 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:43.091464 master-0 kubenswrapper[4147]: W0216 02:04:43.091318 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:43.091589 master-0 kubenswrapper[4147]: E0216 02:04:43.091493 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:43.401484 master-0 kubenswrapper[4147]: E0216 02:04:43.401347 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 02:04:43.580740 master-0 kubenswrapper[4147]: W0216 02:04:43.580547 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:43.580740 master-0 kubenswrapper[4147]: E0216 02:04:43.580702 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:43.638879 master-0 kubenswrapper[4147]: I0216 02:04:43.632801 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:43.638879 master-0 kubenswrapper[4147]: I0216 02:04:43.634487 4147 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:43.638879 master-0 kubenswrapper[4147]: I0216 02:04:43.634547 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:43.638879 master-0 kubenswrapper[4147]: I0216 02:04:43.634580 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:43.638879 master-0 kubenswrapper[4147]: I0216 02:04:43.634711 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:04:43.638879 master-0 kubenswrapper[4147]: E0216 02:04:43.635957 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 02:04:43.948226 master-0 kubenswrapper[4147]: I0216 02:04:43.948115 4147 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 02:04:43.950620 master-0 kubenswrapper[4147]: E0216 02:04:43.950518 4147 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:43.991969 master-0 kubenswrapper[4147]: I0216 02:04:43.991843 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:44.254257 master-0 kubenswrapper[4147]: W0216 02:04:44.253773 4147 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9 WatchSource:0}: Error finding container 0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9: Status 404 returned error can't find the container with id 0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9 Feb 16 02:04:44.254767 master-0 kubenswrapper[4147]: W0216 02:04:44.254694 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9460ca0802075a8a6a10d7b3e6052c4d.slice/crio-c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27 WatchSource:0}: Error finding container c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27: Status 404 returned error can't find the container with id c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27 Feb 16 02:04:44.266751 master-0 kubenswrapper[4147]: I0216 02:04:44.266705 4147 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 02:04:44.286280 master-0 kubenswrapper[4147]: W0216 02:04:44.286214 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3322fd3717f4aec0d8f54ec7862c07e.slice/crio-07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022 WatchSource:0}: Error finding container 07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022: Status 404 returned error can't find the container with id 07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022 Feb 16 02:04:44.305414 master-0 kubenswrapper[4147]: W0216 02:04:44.305367 4147 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d1e91e5a1fed5cf7076a92d2830d36f.slice/crio-ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37 WatchSource:0}: Error finding container ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37: Status 404 returned error can't find the container with id ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37 Feb 16 02:04:44.328300 master-0 kubenswrapper[4147]: W0216 02:04:44.328184 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400a178a4d5e9a88ba5bbbd1da2ad15e.slice/crio-a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447 WatchSource:0}: Error finding container a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447: Status 404 returned error can't find the container with id a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447 Feb 16 02:04:44.874773 master-0 kubenswrapper[4147]: W0216 02:04:44.874707 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:44.874953 master-0 kubenswrapper[4147]: E0216 02:04:44.874788 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:44.991495 master-0 kubenswrapper[4147]: I0216 02:04:44.991407 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": 
dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:45.002389 master-0 kubenswrapper[4147]: E0216 02:04:45.002332 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 02:04:45.095817 master-0 kubenswrapper[4147]: W0216 02:04:45.095695 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:45.095817 master-0 kubenswrapper[4147]: E0216 02:04:45.095757 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:45.196426 master-0 kubenswrapper[4147]: I0216 02:04:45.196321 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447"} Feb 16 02:04:45.197257 master-0 kubenswrapper[4147]: I0216 02:04:45.197161 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37"} Feb 16 02:04:45.198420 master-0 kubenswrapper[4147]: I0216 02:04:45.198365 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022"} Feb 16 02:04:45.199657 master-0 kubenswrapper[4147]: I0216 02:04:45.199558 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27"} Feb 16 02:04:45.200379 master-0 kubenswrapper[4147]: I0216 02:04:45.200339 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9"} Feb 16 02:04:45.236904 master-0 kubenswrapper[4147]: I0216 02:04:45.236764 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:45.237947 master-0 kubenswrapper[4147]: I0216 02:04:45.237901 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:45.237947 master-0 kubenswrapper[4147]: I0216 02:04:45.237945 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:45.238041 master-0 kubenswrapper[4147]: I0216 02:04:45.237959 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:45.238041 master-0 kubenswrapper[4147]: I0216 02:04:45.238009 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:04:45.238941 master-0 kubenswrapper[4147]: E0216 02:04:45.238892 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": 
dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 02:04:45.539979 master-0 kubenswrapper[4147]: W0216 02:04:45.539905 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:45.540198 master-0 kubenswrapper[4147]: E0216 02:04:45.540000 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:45.891646 master-0 kubenswrapper[4147]: E0216 02:04:45.891498 4147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189497d213718835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:41.988581429 +0000 UTC m=+0.624316635,LastTimestamp:2026-02-16 02:04:41.988581429 +0000 UTC m=+0.624316635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:45.991469 master-0 kubenswrapper[4147]: I0216 02:04:45.991384 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:46.206331 master-0 kubenswrapper[4147]: I0216 02:04:46.206205 4147 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="0a5e2f09da7456e3ddaab1d9e62abba19553e3d0c15bed35db33dc97212f0fb6" exitCode=0 Feb 16 02:04:46.206331 master-0 kubenswrapper[4147]: I0216 02:04:46.206281 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"0a5e2f09da7456e3ddaab1d9e62abba19553e3d0c15bed35db33dc97212f0fb6"} Feb 16 02:04:46.207257 master-0 kubenswrapper[4147]: I0216 02:04:46.206339 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:46.208206 master-0 kubenswrapper[4147]: I0216 02:04:46.207601 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:46.208206 master-0 kubenswrapper[4147]: I0216 02:04:46.207639 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:46.208206 master-0 kubenswrapper[4147]: I0216 02:04:46.207649 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:46.767592 master-0 kubenswrapper[4147]: W0216 02:04:46.767511 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:46.767674 master-0 kubenswrapper[4147]: E0216 02:04:46.767607 4147 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:46.990721 master-0 kubenswrapper[4147]: I0216 02:04:46.990587 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:47.211051 master-0 kubenswrapper[4147]: I0216 02:04:47.211018 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:47.211051 master-0 kubenswrapper[4147]: I0216 02:04:47.211024 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44"} Feb 16 02:04:47.211051 master-0 kubenswrapper[4147]: I0216 02:04:47.211063 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071"} Feb 16 02:04:47.211901 master-0 kubenswrapper[4147]: I0216 02:04:47.211878 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:47.211901 master-0 kubenswrapper[4147]: I0216 02:04:47.211902 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:47.212111 master-0 kubenswrapper[4147]: I0216 02:04:47.211911 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Feb 16 02:04:47.212941 master-0 kubenswrapper[4147]: I0216 02:04:47.212918 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/0.log" Feb 16 02:04:47.213412 master-0 kubenswrapper[4147]: I0216 02:04:47.213287 4147 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="e4c8309d6384e5e40467ff4f9fda47d5883a09ce904b67c214dc139ea1ced941" exitCode=1 Feb 16 02:04:47.213412 master-0 kubenswrapper[4147]: I0216 02:04:47.213320 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"e4c8309d6384e5e40467ff4f9fda47d5883a09ce904b67c214dc139ea1ced941"} Feb 16 02:04:47.213412 master-0 kubenswrapper[4147]: I0216 02:04:47.213326 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:47.213827 master-0 kubenswrapper[4147]: I0216 02:04:47.213803 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:47.213827 master-0 kubenswrapper[4147]: I0216 02:04:47.213823 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:47.213922 master-0 kubenswrapper[4147]: I0216 02:04:47.213837 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:47.214044 master-0 kubenswrapper[4147]: I0216 02:04:47.214023 4147 scope.go:117] "RemoveContainer" containerID="e4c8309d6384e5e40467ff4f9fda47d5883a09ce904b67c214dc139ea1ced941" Feb 16 02:04:47.990484 master-0 kubenswrapper[4147]: I0216 02:04:47.990424 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:48.052057 master-0 kubenswrapper[4147]: I0216 02:04:48.052024 4147 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 02:04:48.052946 master-0 kubenswrapper[4147]: E0216 02:04:48.052899 4147 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:48.203957 master-0 kubenswrapper[4147]: E0216 02:04:48.203897 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 16 02:04:48.217813 master-0 kubenswrapper[4147]: I0216 02:04:48.217783 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 16 02:04:48.218244 master-0 kubenswrapper[4147]: I0216 02:04:48.218212 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/0.log" Feb 16 02:04:48.218677 master-0 kubenswrapper[4147]: I0216 02:04:48.218638 4147 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="5181a89c92564f80aa8551c4b2d57544e051f73caa59531edcb38eaae5143732" exitCode=1 Feb 16 02:04:48.218757 master-0 
kubenswrapper[4147]: I0216 02:04:48.218693 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"5181a89c92564f80aa8551c4b2d57544e051f73caa59531edcb38eaae5143732"} Feb 16 02:04:48.218757 master-0 kubenswrapper[4147]: I0216 02:04:48.218743 4147 scope.go:117] "RemoveContainer" containerID="e4c8309d6384e5e40467ff4f9fda47d5883a09ce904b67c214dc139ea1ced941" Feb 16 02:04:48.218757 master-0 kubenswrapper[4147]: I0216 02:04:48.218752 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:48.218881 master-0 kubenswrapper[4147]: I0216 02:04:48.218752 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:48.220364 master-0 kubenswrapper[4147]: I0216 02:04:48.219789 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:48.220364 master-0 kubenswrapper[4147]: I0216 02:04:48.219824 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:48.220364 master-0 kubenswrapper[4147]: I0216 02:04:48.219837 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:48.221244 master-0 kubenswrapper[4147]: I0216 02:04:48.221225 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:48.221244 master-0 kubenswrapper[4147]: I0216 02:04:48.221248 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:48.221356 master-0 kubenswrapper[4147]: I0216 02:04:48.221257 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 
02:04:48.221799 master-0 kubenswrapper[4147]: I0216 02:04:48.221764 4147 scope.go:117] "RemoveContainer" containerID="5181a89c92564f80aa8551c4b2d57544e051f73caa59531edcb38eaae5143732" Feb 16 02:04:48.221994 master-0 kubenswrapper[4147]: E0216 02:04:48.221960 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 02:04:48.439734 master-0 kubenswrapper[4147]: I0216 02:04:48.439680 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:48.440741 master-0 kubenswrapper[4147]: I0216 02:04:48.440700 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:48.440741 master-0 kubenswrapper[4147]: I0216 02:04:48.440734 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:48.440741 master-0 kubenswrapper[4147]: I0216 02:04:48.440743 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:48.440870 master-0 kubenswrapper[4147]: I0216 02:04:48.440781 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:04:48.441590 master-0 kubenswrapper[4147]: E0216 02:04:48.441542 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 02:04:48.990992 master-0 kubenswrapper[4147]: I0216 02:04:48.990896 4147 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:49.274052 master-0 kubenswrapper[4147]: I0216 02:04:49.273855 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:49.275264 master-0 kubenswrapper[4147]: I0216 02:04:49.275204 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:49.275345 master-0 kubenswrapper[4147]: I0216 02:04:49.275268 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:49.275345 master-0 kubenswrapper[4147]: I0216 02:04:49.275288 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:49.275820 master-0 kubenswrapper[4147]: I0216 02:04:49.275784 4147 scope.go:117] "RemoveContainer" containerID="5181a89c92564f80aa8551c4b2d57544e051f73caa59531edcb38eaae5143732" Feb 16 02:04:49.276073 master-0 kubenswrapper[4147]: E0216 02:04:49.276012 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 02:04:49.654962 master-0 kubenswrapper[4147]: W0216 02:04:49.654792 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:49.654962 master-0 
kubenswrapper[4147]: E0216 02:04:49.654914 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:49.927661 master-0 kubenswrapper[4147]: W0216 02:04:49.927574 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:49.927842 master-0 kubenswrapper[4147]: E0216 02:04:49.927685 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:49.991571 master-0 kubenswrapper[4147]: I0216 02:04:49.991515 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:50.991901 master-0 kubenswrapper[4147]: I0216 02:04:50.991840 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:51.282720 master-0 kubenswrapper[4147]: I0216 02:04:51.282241 4147 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" 
containerID="3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b" exitCode=0 Feb 16 02:04:51.282720 master-0 kubenswrapper[4147]: I0216 02:04:51.282410 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerDied","Data":"3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b"} Feb 16 02:04:51.282720 master-0 kubenswrapper[4147]: I0216 02:04:51.282618 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:51.283966 master-0 kubenswrapper[4147]: I0216 02:04:51.283900 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:51.283966 master-0 kubenswrapper[4147]: I0216 02:04:51.283943 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:51.283966 master-0 kubenswrapper[4147]: I0216 02:04:51.283960 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:51.285845 master-0 kubenswrapper[4147]: I0216 02:04:51.285792 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 16 02:04:51.288089 master-0 kubenswrapper[4147]: I0216 02:04:51.288051 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:51.288880 master-0 kubenswrapper[4147]: I0216 02:04:51.288827 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"0d84c00dcc11900a2f5a4ff15f798ef8c8b6cc92a9b7e1f32a7c33bfeed4a478"} Feb 16 02:04:51.288880 master-0 kubenswrapper[4147]: 
I0216 02:04:51.288869 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:51.290274 master-0 kubenswrapper[4147]: I0216 02:04:51.290226 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:51.290274 master-0 kubenswrapper[4147]: I0216 02:04:51.290262 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:51.290274 master-0 kubenswrapper[4147]: I0216 02:04:51.290278 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:51.292935 master-0 kubenswrapper[4147]: I0216 02:04:51.292901 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:51.292935 master-0 kubenswrapper[4147]: I0216 02:04:51.292933 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:51.293104 master-0 kubenswrapper[4147]: I0216 02:04:51.292945 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:51.294745 master-0 kubenswrapper[4147]: I0216 02:04:51.294685 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"e320e64d785f6de34aed9795724368979c84944d7c7a25afb100430d56e9ef3e"} Feb 16 02:04:51.498612 master-0 kubenswrapper[4147]: W0216 02:04:51.498165 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:04:51.498612 master-0 
kubenswrapper[4147]: E0216 02:04:51.498279 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 02:04:52.124232 master-0 kubenswrapper[4147]: E0216 02:04:52.124174 4147 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 02:04:52.325156 master-0 kubenswrapper[4147]: I0216 02:04:52.323736 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:52.325156 master-0 kubenswrapper[4147]: I0216 02:04:52.324250 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961"} Feb 16 02:04:52.325156 master-0 kubenswrapper[4147]: I0216 02:04:52.324697 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:52.325156 master-0 kubenswrapper[4147]: I0216 02:04:52.324780 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:52.325156 master-0 kubenswrapper[4147]: I0216 02:04:52.324798 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:53.183524 master-0 kubenswrapper[4147]: I0216 02:04:53.182798 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Feb 16 02:04:53.183524 master-0 kubenswrapper[4147]: W0216 02:04:53.183176 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 16 02:04:53.183524 master-0 kubenswrapper[4147]: E0216 02:04:53.183225 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 16 02:04:53.335481 master-0 kubenswrapper[4147]: I0216 02:04:53.335407 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"7921033cca2163ce5e4549f18d23b23e3797f9935bb1bd7ed5580d96e9031f08"} Feb 16 02:04:53.335674 master-0 kubenswrapper[4147]: I0216 02:04:53.335573 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:53.336544 master-0 kubenswrapper[4147]: I0216 02:04:53.336511 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:53.336601 master-0 kubenswrapper[4147]: I0216 02:04:53.336551 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:53.336601 master-0 kubenswrapper[4147]: I0216 02:04:53.336564 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:53.999049 master-0 kubenswrapper[4147]: I0216 02:04:53.998972 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 02:04:54.337491 master-0 kubenswrapper[4147]: I0216 02:04:54.337340 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:54.338404 master-0 kubenswrapper[4147]: I0216 02:04:54.338073 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:54.338404 master-0 kubenswrapper[4147]: I0216 02:04:54.338113 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:54.338404 master-0 kubenswrapper[4147]: I0216 02:04:54.338124 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:54.611710 master-0 kubenswrapper[4147]: E0216 02:04:54.611653 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 02:04:54.841870 master-0 kubenswrapper[4147]: I0216 02:04:54.841680 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:54.844008 master-0 kubenswrapper[4147]: I0216 02:04:54.843885 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:54.844008 master-0 kubenswrapper[4147]: I0216 02:04:54.843962 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:54.844008 master-0 kubenswrapper[4147]: I0216 02:04:54.843981 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Feb 16 02:04:54.844454 master-0 kubenswrapper[4147]: I0216 02:04:54.844064 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:04:54.851590 master-0 kubenswrapper[4147]: E0216 02:04:54.851531 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 16 02:04:54.997765 master-0 kubenswrapper[4147]: I0216 02:04:54.997699 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 02:04:55.344393 master-0 kubenswrapper[4147]: I0216 02:04:55.344321 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369"} Feb 16 02:04:55.345836 master-0 kubenswrapper[4147]: I0216 02:04:55.344409 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:55.350812 master-0 kubenswrapper[4147]: I0216 02:04:55.350766 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:55.350975 master-0 kubenswrapper[4147]: I0216 02:04:55.350880 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:55.350975 master-0 kubenswrapper[4147]: I0216 02:04:55.350901 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:55.900498 master-0 kubenswrapper[4147]: E0216 02:04:55.900280 4147 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d213718835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:41.988581429 +0000 UTC m=+0.624316635,LastTimestamp:2026-02-16 02:04:41.988581429 +0000 UTC m=+0.624316635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.907992 master-0 kubenswrapper[4147]: E0216 02:04:55.907766 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666463c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,LastTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.915004 master-0 kubenswrapper[4147]: E0216 02:04:55.914840 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189497d216668732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,LastTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.921915 master-0 kubenswrapper[4147]: E0216 02:04:55.921745 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666a8ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038200492 +0000 UTC m=+0.673935598,LastTimestamp:2026-02-16 02:04:42.038200492 +0000 UTC m=+0.673935598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.929372 master-0 kubenswrapper[4147]: E0216 02:04:55.929207 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21ccc6968 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.145532264 +0000 UTC m=+0.781267440,LastTimestamp:2026-02-16 02:04:42.145532264 +0000 UTC m=+0.781267440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.936968 master-0 kubenswrapper[4147]: E0216 02:04:55.936844 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666463c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666463c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,LastTimestamp:2026-02-16 02:04:42.223537101 +0000 UTC m=+0.859272257,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.943482 master-0 kubenswrapper[4147]: E0216 02:04:55.943317 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d216668732\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d216668732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,LastTimestamp:2026-02-16 02:04:42.223570321 +0000 UTC m=+0.859305467,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.951001 master-0 kubenswrapper[4147]: E0216 02:04:55.950862 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666a8ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666a8ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038200492 +0000 UTC m=+0.673935598,LastTimestamp:2026-02-16 02:04:42.223591621 +0000 UTC m=+0.859326777,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.962940 master-0 kubenswrapper[4147]: E0216 02:04:55.962812 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666463c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666463c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,LastTimestamp:2026-02-16 02:04:42.287998639 +0000 UTC m=+0.923733795,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.972463 master-0 kubenswrapper[4147]: E0216 02:04:55.972283 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d216668732\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d216668732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,LastTimestamp:2026-02-16 02:04:42.288030659 +0000 UTC m=+0.923765805,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.980152 master-0 kubenswrapper[4147]: E0216 02:04:55.980032 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666a8ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666a8ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038200492 +0000 UTC m=+0.673935598,LastTimestamp:2026-02-16 02:04:42.288046219 +0000 UTC m=+0.923781375,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.986915 master-0 kubenswrapper[4147]: E0216 02:04:55.986759 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666463c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666463c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,LastTimestamp:2026-02-16 02:04:42.289370527 +0000 UTC m=+0.925105713,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:55.996190 master-0 kubenswrapper[4147]: I0216 02:04:55.996095 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 02:04:55.996644 master-0 kubenswrapper[4147]: E0216 02:04:55.996290 4147 event.go:359] "Server rejected event (will not retry!)" err="events 
\"master-0.189497d216668732\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d216668732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,LastTimestamp:2026-02-16 02:04:42.289414468 +0000 UTC m=+0.925149654,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.002772 master-0 kubenswrapper[4147]: E0216 02:04:56.002590 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666a8ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666a8ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038200492 +0000 UTC m=+0.673935598,LastTimestamp:2026-02-16 02:04:42.289472488 +0000 UTC m=+0.925207674,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.010367 master-0 kubenswrapper[4147]: E0216 02:04:56.010226 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666463c\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666463c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,LastTimestamp:2026-02-16 02:04:42.289544479 +0000 UTC m=+0.925279635,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.016883 master-0 kubenswrapper[4147]: E0216 02:04:56.016748 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d216668732\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d216668732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,LastTimestamp:2026-02-16 02:04:42.28956904 +0000 UTC m=+0.925304196,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.024023 master-0 kubenswrapper[4147]: E0216 02:04:56.023831 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666a8ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{master-0.189497d21666a8ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038200492 +0000 UTC m=+0.673935598,LastTimestamp:2026-02-16 02:04:42.28960908 +0000 UTC m=+0.925344236,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.031137 master-0 kubenswrapper[4147]: E0216 02:04:56.030959 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666463c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666463c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,LastTimestamp:2026-02-16 02:04:42.294794538 +0000 UTC m=+0.930529684,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.037314 master-0 kubenswrapper[4147]: E0216 02:04:56.037138 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d216668732\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d216668732 default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,LastTimestamp:2026-02-16 02:04:42.29490374 +0000 UTC m=+0.930638916,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.044712 master-0 kubenswrapper[4147]: E0216 02:04:56.044591 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666a8ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666a8ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038200492 +0000 UTC m=+0.673935598,LastTimestamp:2026-02-16 02:04:42.29494339 +0000 UTC m=+0.930678536,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.051297 master-0 kubenswrapper[4147]: E0216 02:04:56.051146 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666463c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666463c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,LastTimestamp:2026-02-16 02:04:42.295359356 +0000 UTC m=+0.931094532,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.058046 master-0 kubenswrapper[4147]: E0216 02:04:56.057901 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d216668732\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d216668732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,LastTimestamp:2026-02-16 02:04:42.295407026 +0000 UTC m=+0.931142172,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.065168 master-0 kubenswrapper[4147]: E0216 02:04:56.064964 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666a8ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666a8ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038200492 +0000 UTC m=+0.673935598,LastTimestamp:2026-02-16 02:04:42.295459917 +0000 UTC m=+0.931195063,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.071657 master-0 kubenswrapper[4147]: E0216 02:04:56.071529 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d21666463c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d21666463c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038175292 +0000 UTC m=+0.673910408,LastTimestamp:2026-02-16 02:04:42.297734457 +0000 UTC m=+0.933469633,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.077270 master-0 kubenswrapper[4147]: E0216 02:04:56.077024 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189497d216668732\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189497d216668732 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:42.038191922 +0000 UTC m=+0.673927038,LastTimestamp:2026-02-16 02:04:42.297757927 +0000 UTC m=+0.933493073,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.084519 master-0 kubenswrapper[4147]: E0216 02:04:56.084361 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189497d29b3a1629 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:44.266649129 +0000 UTC m=+2.902384285,LastTimestamp:2026-02-16 02:04:44.266649129 +0000 UTC m=+2.902384285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.090799 master-0 kubenswrapper[4147]: E0216 02:04:56.090677 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189497d29b3aa3bd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:44.266685373 +0000 UTC m=+2.902420519,LastTimestamp:2026-02-16 02:04:44.266685373 +0000 UTC m=+2.902420519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.098315 master-0 kubenswrapper[4147]: E0216 02:04:56.098166 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d29caf4998 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:44.291107224 +0000 UTC m=+2.926842380,LastTimestamp:2026-02-16 02:04:44.291107224 +0000 UTC m=+2.926842380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.104684 master-0 kubenswrapper[4147]: E0216 02:04:56.104523 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d29defe7ac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:44.312119212 +0000 UTC m=+2.947854368,LastTimestamp:2026-02-16 02:04:44.312119212 +0000 UTC m=+2.947854368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.109936 master-0 kubenswrapper[4147]: E0216 02:04:56.109717 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189497d29f0df2c0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:44.330865344 +0000 UTC m=+2.966600500,LastTimestamp:2026-02-16 02:04:44.330865344 +0000 UTC m=+2.966600500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.117593 master-0 kubenswrapper[4147]: E0216 02:04:56.117409 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d2fbb73bd3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" in 1.594s (1.594s including waiting). 
Image size: 459915626 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:45.885463507 +0000 UTC m=+4.521198633,LastTimestamp:2026-02-16 02:04:45.885463507 +0000 UTC m=+4.521198633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.122621 master-0 kubenswrapper[4147]: E0216 02:04:56.122481 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d3079f4243 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.085218883 +0000 UTC m=+4.720953999,LastTimestamp:2026-02-16 02:04:46.085218883 +0000 UTC m=+4.720953999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.129240 master-0 kubenswrapper[4147]: E0216 02:04:56.129108 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d308afc505 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.103078149 +0000 UTC m=+4.738813265,LastTimestamp:2026-02-16 02:04:46.103078149 +0000 UTC m=+4.738813265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.136331 master-0 kubenswrapper[4147]: E0216 02:04:56.136195 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d32f504ae3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.751132387 +0000 UTC m=+5.386867503,LastTimestamp:2026-02-16 02:04:46.751132387 +0000 UTC m=+5.386867503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.142943 master-0 kubenswrapper[4147]: E0216 02:04:56.142799 4147 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189497d3324a31da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" in 2.47s (2.47s including waiting). Image size: 524042902 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.80106441 +0000 UTC m=+5.436799526,LastTimestamp:2026-02-16 02:04:46.80106441 +0000 UTC m=+5.436799526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.152759 master-0 kubenswrapper[4147]: E0216 02:04:56.152528 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d33ba844e0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.958224608 +0000 UTC m=+5.593959724,LastTimestamp:2026-02-16 02:04:46.958224608 +0000 UTC 
m=+5.593959724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.160125 master-0 kubenswrapper[4147]: E0216 02:04:56.159940 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d33cb40298 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.975771288 +0000 UTC m=+5.611506404,LastTimestamp:2026-02-16 02:04:46.975771288 +0000 UTC m=+5.611506404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.167254 master-0 kubenswrapper[4147]: E0216 02:04:56.167040 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189497d33d0850a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 
02:04:46.981296292 +0000 UTC m=+5.617031408,LastTimestamp:2026-02-16 02:04:46.981296292 +0000 UTC m=+5.617031408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.174361 master-0 kubenswrapper[4147]: E0216 02:04:56.174230 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189497d33dffc3de openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.997513182 +0000 UTC m=+5.633248288,LastTimestamp:2026-02-16 02:04:46.997513182 +0000 UTC m=+5.633248288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.180970 master-0 kubenswrapper[4147]: E0216 02:04:56.180812 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189497d33e1dd0c5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.999482565 +0000 UTC m=+5.635217671,LastTimestamp:2026-02-16 02:04:46.999482565 +0000 UTC m=+5.635217671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.187611 master-0 kubenswrapper[4147]: E0216 02:04:56.187480 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189497d347c81001 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:47.161634817 +0000 UTC m=+5.797369933,LastTimestamp:2026-02-16 02:04:47.161634817 +0000 UTC m=+5.797369933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.194250 master-0 kubenswrapper[4147]: E0216 02:04:56.194132 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189497d348f4a736 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:47.181334326 +0000 UTC m=+5.817069442,LastTimestamp:2026-02-16 02:04:47.181334326 +0000 UTC m=+5.817069442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.201427 master-0 kubenswrapper[4147]: E0216 02:04:56.201256 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189497d32f504ae3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d32f504ae3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.751132387 +0000 UTC m=+5.386867503,LastTimestamp:2026-02-16 02:04:47.217385336 +0000 UTC m=+5.853120452,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.208314 master-0 kubenswrapper[4147]: E0216 02:04:56.208134 4147 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-rbac-proxy-crio-master-0.189497d33ba844e0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d33ba844e0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.958224608 +0000 UTC m=+5.593959724,LastTimestamp:2026-02-16 02:04:47.447004858 +0000 UTC m=+6.082739974,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.218796 master-0 kubenswrapper[4147]: E0216 02:04:56.218655 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189497d33cb40298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d33cb40298 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.975771288 +0000 UTC m=+5.611506404,LastTimestamp:2026-02-16 02:04:47.464404723 +0000 UTC 
m=+6.100139839,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.227239 master-0 kubenswrapper[4147]: E0216 02:04:56.227072 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d386fa634b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:48.221897547 +0000 UTC m=+6.857632663,LastTimestamp:2026-02-16 02:04:48.221897547 +0000 UTC m=+6.857632663,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.234725 master-0 kubenswrapper[4147]: E0216 02:04:56.234571 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189497d386fa634b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d386fa634b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:48.221897547 +0000 UTC m=+6.857632663,LastTimestamp:2026-02-16 02:04:49.275973821 +0000 UTC m=+7.911708967,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.241635 master-0 kubenswrapper[4147]: E0216 02:04:56.241481 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189497d4213cb1ef kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.543s (6.543s including waiting). 
Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:50.809934319 +0000 UTC m=+9.445669475,LastTimestamp:2026-02-16 02:04:50.809934319 +0000 UTC m=+9.445669475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.246951 master-0 kubenswrapper[4147]: E0216 02:04:56.246812 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189497d421a9e245 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.55s (6.55s including waiting). 
Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:50.817090117 +0000 UTC m=+9.452825233,LastTimestamp:2026-02-16 02:04:50.817090117 +0000 UTC m=+9.452825233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.251316 master-0 kubenswrapper[4147]: E0216 02:04:56.251070 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d4233c4362 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.531s (6.531s including waiting). 
Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:50.84346045 +0000 UTC m=+9.479195566,LastTimestamp:2026-02-16 02:04:50.84346045 +0000 UTC m=+9.479195566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.257489 master-0 kubenswrapper[4147]: E0216 02:04:56.257321 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189497d42ebdf06e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.03650827 +0000 UTC m=+9.672243396,LastTimestamp:2026-02-16 02:04:51.03650827 +0000 UTC m=+9.672243396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.264092 master-0 kubenswrapper[4147]: E0216 02:04:56.263929 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189497d42ec959b7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.037256119 +0000 UTC m=+9.672991265,LastTimestamp:2026-02-16 02:04:51.037256119 +0000 UTC m=+9.672991265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.271272 master-0 kubenswrapper[4147]: E0216 02:04:56.271066 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d42f442c4a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.045305418 +0000 UTC m=+9.681040544,LastTimestamp:2026-02-16 02:04:51.045305418 +0000 UTC m=+9.681040544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.272180 master-0 kubenswrapper[4147]: I0216 02:04:56.272139 4147 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:56.272294 master-0 kubenswrapper[4147]: I0216 02:04:56.272266 4147 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Feb 16 02:04:56.275855 master-0 kubenswrapper[4147]: I0216 02:04:56.275280 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:56.275855 master-0 kubenswrapper[4147]: I0216 02:04:56.275375 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:56.275855 master-0 kubenswrapper[4147]: I0216 02:04:56.275399 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:56.278769 master-0 kubenswrapper[4147]: E0216 02:04:56.278581 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189497d42f5883a5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.046638501 +0000 UTC m=+9.682373657,LastTimestamp:2026-02-16 02:04:51.046638501 +0000 UTC m=+9.682373657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.281971 master-0 kubenswrapper[4147]: I0216 02:04:56.281930 4147 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:56.286370 master-0 kubenswrapper[4147]: E0216 02:04:56.286216 4147 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189497d42f888adf kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.049786079 +0000 UTC m=+9.685521225,LastTimestamp:2026-02-16 02:04:51.049786079 +0000 UTC m=+9.685521225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.292828 master-0 kubenswrapper[4147]: E0216 02:04:56.292728 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189497d42f9c622d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.051086381 +0000 UTC m=+9.686821507,LastTimestamp:2026-02-16 02:04:51.051086381 +0000 UTC m=+9.686821507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.299670 master-0 kubenswrapper[4147]: E0216 02:04:56.299525 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d4302aea8b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.060427403 +0000 UTC m=+9.696162549,LastTimestamp:2026-02-16 02:04:51.060427403 +0000 UTC m=+9.696162549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.307348 master-0 kubenswrapper[4147]: E0216 02:04:56.307225 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d43dbad1c0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on 
machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.287962048 +0000 UTC m=+9.923697194,LastTimestamp:2026-02-16 02:04:51.287962048 +0000 UTC m=+9.923697194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.314423 master-0 kubenswrapper[4147]: E0216 02:04:56.314241 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d44e7311fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.568472574 +0000 UTC m=+10.204207700,LastTimestamp:2026-02-16 02:04:51.568472574 +0000 UTC m=+10.204207700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.321983 master-0 kubenswrapper[4147]: E0216 02:04:56.321793 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d44f4318d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.582105812 +0000 UTC m=+10.217840938,LastTimestamp:2026-02-16 02:04:51.582105812 +0000 UTC m=+10.217840938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.329650 master-0 kubenswrapper[4147]: E0216 02:04:56.329408 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d44f4fb603 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:51.582932483 +0000 UTC m=+10.218667619,LastTimestamp:2026-02-16 02:04:51.582932483 +0000 UTC m=+10.218667619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.337057 master-0 kubenswrapper[4147]: E0216 02:04:56.336896 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189497d49e290832 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\" in 1.854s (1.854s including waiting). Image size: 500068323 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:52.905797682 +0000 UTC m=+11.541532808,LastTimestamp:2026-02-16 02:04:52.905797682 +0000 UTC m=+11.541532808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.343327 master-0 kubenswrapper[4147]: E0216 02:04:56.343191 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189497d4aea873ff kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:53.182583807 +0000 UTC m=+11.818318923,LastTimestamp:2026-02-16 02:04:53.182583807 +0000 UTC 
m=+11.818318923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.348649 master-0 kubenswrapper[4147]: I0216 02:04:56.347675 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:56.348649 master-0 kubenswrapper[4147]: I0216 02:04:56.347705 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:56.348649 master-0 kubenswrapper[4147]: I0216 02:04:56.347748 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:56.349587 master-0 kubenswrapper[4147]: I0216 02:04:56.349068 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:56.349587 master-0 kubenswrapper[4147]: I0216 02:04:56.349109 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:56.349587 master-0 kubenswrapper[4147]: I0216 02:04:56.349126 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:56.349587 master-0 kubenswrapper[4147]: I0216 02:04:56.349235 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:56.349587 master-0 kubenswrapper[4147]: I0216 02:04:56.349304 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:56.349587 master-0 kubenswrapper[4147]: I0216 02:04:56.349316 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:56.349931 master-0 kubenswrapper[4147]: E0216 02:04:56.349535 4147 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189497d4afe13a88 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:53.203081864 +0000 UTC m=+11.838816980,LastTimestamp:2026-02-16 02:04:53.203081864 +0000 UTC m=+11.838816980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.357475 master-0 kubenswrapper[4147]: E0216 02:04:56.357274 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d4f9292af9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" in 2.849s (2.849s including waiting). 
Image size: 509806416 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:54.432533241 +0000 UTC m=+13.068268357,LastTimestamp:2026-02-16 02:04:54.432533241 +0000 UTC m=+13.068268357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.365302 master-0 kubenswrapper[4147]: E0216 02:04:56.365141 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d5057c43be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:54.639305662 +0000 UTC m=+13.275040798,LastTimestamp:2026-02-16 02:04:54.639305662 +0000 UTC m=+13.275040798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.372709 master-0 kubenswrapper[4147]: E0216 02:04:56.372547 4147 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189497d5063d5827 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:54.651959335 +0000 UTC m=+13.287694471,LastTimestamp:2026-02-16 02:04:54.651959335 +0000 UTC m=+13.287694471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:04:56.482988 master-0 kubenswrapper[4147]: I0216 02:04:56.482767 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:56.799009 master-0 kubenswrapper[4147]: I0216 02:04:56.798787 4147 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 02:04:56.825601 master-0 kubenswrapper[4147]: I0216 02:04:56.825519 4147 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 02:04:56.996669 master-0 kubenswrapper[4147]: I0216 02:04:56.996580 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 02:04:57.350401 master-0 kubenswrapper[4147]: I0216 02:04:57.350299 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:57.350401 master-0 kubenswrapper[4147]: I0216 02:04:57.350351 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:57.352114 master-0 kubenswrapper[4147]: I0216 02:04:57.351926 
4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:57.352114 master-0 kubenswrapper[4147]: I0216 02:04:57.351980 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:57.352114 master-0 kubenswrapper[4147]: I0216 02:04:57.352003 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:57.352643 master-0 kubenswrapper[4147]: I0216 02:04:57.352294 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:57.352643 master-0 kubenswrapper[4147]: I0216 02:04:57.352352 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:57.352643 master-0 kubenswrapper[4147]: I0216 02:04:57.352372 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:58.001811 master-0 kubenswrapper[4147]: I0216 02:04:58.001730 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 02:04:58.476521 master-0 kubenswrapper[4147]: W0216 02:04:58.476389 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 16 02:04:58.477256 master-0 kubenswrapper[4147]: E0216 02:04:58.476525 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot 
list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 16 02:04:58.795532 master-0 kubenswrapper[4147]: W0216 02:04:58.795297 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 16 02:04:58.795708 master-0 kubenswrapper[4147]: E0216 02:04:58.795539 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 16 02:04:58.999953 master-0 kubenswrapper[4147]: I0216 02:04:58.999862 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 02:04:59.141625 master-0 kubenswrapper[4147]: I0216 02:04:59.141420 4147 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:59.141856 master-0 kubenswrapper[4147]: I0216 02:04:59.141656 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:59.144852 master-0 kubenswrapper[4147]: I0216 02:04:59.144727 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:59.144988 master-0 kubenswrapper[4147]: I0216 02:04:59.144887 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:59.144988 master-0 kubenswrapper[4147]: I0216 02:04:59.144909 4147 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:59.150287 master-0 kubenswrapper[4147]: I0216 02:04:59.150222 4147 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:59.355353 master-0 kubenswrapper[4147]: I0216 02:04:59.355255 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:59.356926 master-0 kubenswrapper[4147]: I0216 02:04:59.356842 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:59.356926 master-0 kubenswrapper[4147]: I0216 02:04:59.356894 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:04:59.356926 master-0 kubenswrapper[4147]: I0216 02:04:59.356910 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:59.366271 master-0 kubenswrapper[4147]: I0216 02:04:59.366212 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:04:59.584898 master-0 kubenswrapper[4147]: I0216 02:04:59.584783 4147 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:04:59.585992 master-0 kubenswrapper[4147]: I0216 02:04:59.585098 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:04:59.586818 master-0 kubenswrapper[4147]: I0216 02:04:59.586741 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:04:59.586976 master-0 kubenswrapper[4147]: I0216 02:04:59.586844 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 
02:04:59.586976 master-0 kubenswrapper[4147]: I0216 02:04:59.586872 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:04:59.591558 master-0 kubenswrapper[4147]: I0216 02:04:59.591512 4147 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:05:00.002830 master-0 kubenswrapper[4147]: I0216 02:05:00.002742 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 02:05:00.063534 master-0 kubenswrapper[4147]: W0216 02:05:00.063358 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 16 02:05:00.063534 master-0 kubenswrapper[4147]: E0216 02:05:00.063417 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 16 02:05:00.187413 master-0 kubenswrapper[4147]: I0216 02:05:00.187315 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:05:00.188814 master-0 kubenswrapper[4147]: I0216 02:05:00.188751 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:05:00.188814 master-0 kubenswrapper[4147]: I0216 02:05:00.188811 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:05:00.188999 master-0 kubenswrapper[4147]: I0216 02:05:00.188829 4147 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:05:00.189298 master-0 kubenswrapper[4147]: I0216 02:05:00.189262 4147 scope.go:117] "RemoveContainer" containerID="5181a89c92564f80aa8551c4b2d57544e051f73caa59531edcb38eaae5143732" Feb 16 02:05:00.207270 master-0 kubenswrapper[4147]: E0216 02:05:00.207064 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189497d32f504ae3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d32f504ae3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.751132387 +0000 UTC m=+5.386867503,LastTimestamp:2026-02-16 02:05:00.193007089 +0000 UTC m=+18.828742245,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:05:00.358405 master-0 kubenswrapper[4147]: I0216 02:05:00.358345 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:05:00.358663 master-0 kubenswrapper[4147]: I0216 02:05:00.358414 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:05:00.360075 master-0 
kubenswrapper[4147]: I0216 02:05:00.359899 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:00.361001 master-0 kubenswrapper[4147]: I0216 02:05:00.360949 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:00.361128 master-0 kubenswrapper[4147]: I0216 02:05:00.361011 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:00.361128 master-0 kubenswrapper[4147]: I0216 02:05:00.361036 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:00.361263 master-0 kubenswrapper[4147]: I0216 02:05:00.361145 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:00.361263 master-0 kubenswrapper[4147]: I0216 02:05:00.361185 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:00.361263 master-0 kubenswrapper[4147]: I0216 02:05:00.361204 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:00.481504 master-0 kubenswrapper[4147]: E0216 02:05:00.481199 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189497d33ba844e0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d33ba844e0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.958224608 +0000 UTC m=+5.593959724,LastTimestamp:2026-02-16 02:05:00.472359817 +0000 UTC m=+19.108094973,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:05:00.503378 master-0 kubenswrapper[4147]: E0216 02:05:00.503228 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189497d33cb40298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d33cb40298 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:46.975771288 +0000 UTC m=+5.611506404,LastTimestamp:2026-02-16 02:05:00.494594507 +0000 UTC m=+19.130329663,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:05:00.997829 master-0 kubenswrapper[4147]: I0216 02:05:00.997770 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:01.315301 master-0 kubenswrapper[4147]: I0216 02:05:01.315217 4147 csr.go:261] certificate signing request csr-jxsvg is approved, waiting to be issued
Feb 16 02:05:01.362409 master-0 kubenswrapper[4147]: I0216 02:05:01.362301 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log"
Feb 16 02:05:01.362963 master-0 kubenswrapper[4147]: I0216 02:05:01.362845 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log"
Feb 16 02:05:01.363462 master-0 kubenswrapper[4147]: I0216 02:05:01.363377 4147 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b" exitCode=1
Feb 16 02:05:01.363764 master-0 kubenswrapper[4147]: I0216 02:05:01.363518 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b"}
Feb 16 02:05:01.363764 master-0 kubenswrapper[4147]: I0216 02:05:01.363566 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:01.363764 master-0 kubenswrapper[4147]: I0216 02:05:01.363634 4147 scope.go:117] "RemoveContainer" containerID="5181a89c92564f80aa8551c4b2d57544e051f73caa59531edcb38eaae5143732"
Feb 16 02:05:01.363975 master-0 kubenswrapper[4147]: I0216 02:05:01.363819 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:01.365265 master-0 kubenswrapper[4147]: I0216 02:05:01.365226 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:01.365407 master-0 kubenswrapper[4147]: I0216 02:05:01.365279 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:01.365407 master-0 kubenswrapper[4147]: I0216 02:05:01.365297 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:01.366480 master-0 kubenswrapper[4147]: I0216 02:05:01.365719 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:01.366480 master-0 kubenswrapper[4147]: I0216 02:05:01.365761 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:01.366480 master-0 kubenswrapper[4147]: I0216 02:05:01.365781 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:01.366480 master-0 kubenswrapper[4147]: I0216 02:05:01.365803 4147 scope.go:117] "RemoveContainer" containerID="1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b"
Feb 16 02:05:01.366480 master-0 kubenswrapper[4147]: E0216 02:05:01.366024 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e"
Feb 16 02:05:01.369608 master-0 kubenswrapper[4147]: I0216 02:05:01.369504 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:05:01.374245 master-0 kubenswrapper[4147]: E0216 02:05:01.374066 4147 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189497d386fa634b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189497d386fa634b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:04:48.221897547 +0000 UTC m=+6.857632663,LastTimestamp:2026-02-16 02:05:01.365991087 +0000 UTC m=+20.001726233,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:05:01.620754 master-0 kubenswrapper[4147]: E0216 02:05:01.620569 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 16 02:05:01.852818 master-0 kubenswrapper[4147]: I0216 02:05:01.852696 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:01.854321 master-0 kubenswrapper[4147]: I0216 02:05:01.854262 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:01.854321 master-0 kubenswrapper[4147]: I0216 02:05:01.854332 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:01.854572 master-0 kubenswrapper[4147]: I0216 02:05:01.854350 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:01.854572 master-0 kubenswrapper[4147]: I0216 02:05:01.854431 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 02:05:01.862309 master-0 kubenswrapper[4147]: E0216 02:05:01.862245 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 16 02:05:01.998427 master-0 kubenswrapper[4147]: I0216 02:05:01.998367 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:02.124475 master-0 kubenswrapper[4147]: E0216 02:05:02.124373 4147 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 16 02:05:02.368341 master-0 kubenswrapper[4147]: I0216 02:05:02.368204 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log"
Feb 16 02:05:02.368922 master-0 kubenswrapper[4147]: I0216 02:05:02.368892 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:02.370081 master-0 kubenswrapper[4147]: I0216 02:05:02.370038 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:02.370169 master-0 kubenswrapper[4147]: I0216 02:05:02.370102 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:02.370169 master-0 kubenswrapper[4147]: I0216 02:05:02.370120 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:02.999303 master-0 kubenswrapper[4147]: I0216 02:05:02.999224 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:04.002389 master-0 kubenswrapper[4147]: I0216 02:05:04.002334 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:04.997851 master-0 kubenswrapper[4147]: I0216 02:05:04.997756 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:05.214078 master-0 kubenswrapper[4147]: W0216 02:05:05.213990 4147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 16 02:05:05.214876 master-0 kubenswrapper[4147]: E0216 02:05:05.214080 4147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 16 02:05:05.998041 master-0 kubenswrapper[4147]: I0216 02:05:05.997964 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:06.997898 master-0 kubenswrapper[4147]: I0216 02:05:06.997814 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:07.997655 master-0 kubenswrapper[4147]: I0216 02:05:07.997597 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:08.629897 master-0 kubenswrapper[4147]: E0216 02:05:08.629810 4147 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 16 02:05:08.862569 master-0 kubenswrapper[4147]: I0216 02:05:08.862385 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:08.864739 master-0 kubenswrapper[4147]: I0216 02:05:08.864698 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:08.864942 master-0 kubenswrapper[4147]: I0216 02:05:08.864919 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:08.865122 master-0 kubenswrapper[4147]: I0216 02:05:08.865102 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:08.865353 master-0 kubenswrapper[4147]: I0216 02:05:08.865332 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 02:05:08.873008 master-0 kubenswrapper[4147]: E0216 02:05:08.872973 4147 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 16 02:05:08.998165 master-0 kubenswrapper[4147]: I0216 02:05:08.998102 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:09.997462 master-0 kubenswrapper[4147]: I0216 02:05:09.997379 4147 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 02:05:10.263025 master-0 kubenswrapper[4147]: I0216 02:05:10.262875 4147 csr.go:257] certificate signing request csr-jxsvg is issued
Feb 16 02:05:10.746558 master-0 kubenswrapper[4147]: I0216 02:05:10.746168 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:05:10.746821 master-0 kubenswrapper[4147]: I0216 02:05:10.746659 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:10.747899 master-0 kubenswrapper[4147]: I0216 02:05:10.747801 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:10.748044 master-0 kubenswrapper[4147]: I0216 02:05:10.747914 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:10.748044 master-0 kubenswrapper[4147]: I0216 02:05:10.747940 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:10.857753 master-0 kubenswrapper[4147]: I0216 02:05:10.857675 4147 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 16 02:05:11.002303 master-0 kubenswrapper[4147]: I0216 02:05:11.002187 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.018537 master-0 kubenswrapper[4147]: I0216 02:05:11.018472 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.076265 master-0 kubenswrapper[4147]: I0216 02:05:11.076204 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.284196 master-0 kubenswrapper[4147]: I0216 02:05:11.263914 4147 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 01:56:50 +0000 UTC, rotation deadline is 2026-02-16 23:06:05.033269013 +0000 UTC
Feb 16 02:05:11.284196 master-0 kubenswrapper[4147]: I0216 02:05:11.263968 4147 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 21h0m53.769306459s for next certificate rotation
Feb 16 02:05:11.333370 master-0 kubenswrapper[4147]: I0216 02:05:11.333259 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.333370 master-0 kubenswrapper[4147]: E0216 02:05:11.333306 4147 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 16 02:05:11.355113 master-0 kubenswrapper[4147]: I0216 02:05:11.355028 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.371332 master-0 kubenswrapper[4147]: I0216 02:05:11.371288 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.434118 master-0 kubenswrapper[4147]: I0216 02:05:11.434024 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.707513 master-0 kubenswrapper[4147]: I0216 02:05:11.707215 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.707513 master-0 kubenswrapper[4147]: E0216 02:05:11.707246 4147 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 16 02:05:11.804002 master-0 kubenswrapper[4147]: I0216 02:05:11.803822 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.819323 master-0 kubenswrapper[4147]: I0216 02:05:11.819294 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:11.878938 master-0 kubenswrapper[4147]: I0216 02:05:11.878823 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:12.002934 master-0 kubenswrapper[4147]: I0216 02:05:12.002804 4147 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 02:05:12.125525 master-0 kubenswrapper[4147]: E0216 02:05:12.125394 4147 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 16 02:05:12.144806 master-0 kubenswrapper[4147]: I0216 02:05:12.144744 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:12.144806 master-0 kubenswrapper[4147]: E0216 02:05:12.144780 4147 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 16 02:05:12.738188 master-0 kubenswrapper[4147]: I0216 02:05:12.738108 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:12.754404 master-0 kubenswrapper[4147]: I0216 02:05:12.754304 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:12.812414 master-0 kubenswrapper[4147]: I0216 02:05:12.812320 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:12.990643 master-0 kubenswrapper[4147]: I0216 02:05:12.990476 4147 apiserver.go:52] "Watching apiserver"
Feb 16 02:05:12.994216 master-0 kubenswrapper[4147]: I0216 02:05:12.994166 4147 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 02:05:12.994397 master-0 kubenswrapper[4147]: I0216 02:05:12.994329 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[]
Feb 16 02:05:13.076038 master-0 kubenswrapper[4147]: I0216 02:05:13.075943 4147 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 16 02:05:13.076038 master-0 kubenswrapper[4147]: E0216 02:05:13.076007 4147 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 16 02:05:13.094354 master-0 kubenswrapper[4147]: I0216 02:05:13.094283 4147 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 16 02:05:15.187813 master-0 kubenswrapper[4147]: I0216 02:05:15.187717 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:15.189236 master-0 kubenswrapper[4147]: I0216 02:05:15.189181 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:15.189236 master-0 kubenswrapper[4147]: I0216 02:05:15.189237 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:15.189370 master-0 kubenswrapper[4147]: I0216 02:05:15.189255 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:15.189796 master-0 kubenswrapper[4147]: I0216 02:05:15.189753 4147 scope.go:117] "RemoveContainer" containerID="1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b"
Feb 16 02:05:15.190028 master-0 kubenswrapper[4147]: E0216 02:05:15.189979 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e"
Feb 16 02:05:15.278489 master-0 kubenswrapper[4147]: I0216 02:05:15.278397 4147 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 02:05:15.636167 master-0 kubenswrapper[4147]: E0216 02:05:15.635976 4147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Feb 16 02:05:15.873638 master-0 kubenswrapper[4147]: I0216 02:05:15.873541 4147 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:05:15.875361 master-0 kubenswrapper[4147]: I0216 02:05:15.875254 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:05:15.875545 master-0 kubenswrapper[4147]: I0216 02:05:15.875388 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:05:15.875545 master-0 kubenswrapper[4147]: I0216 02:05:15.875410 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:05:15.875545 master-0 kubenswrapper[4147]: I0216 02:05:15.875522 4147 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 02:05:15.885024 master-0 kubenswrapper[4147]: I0216 02:05:15.884967 4147 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 16 02:05:16.009341 master-0 kubenswrapper[4147]: I0216 02:05:16.009259 4147 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 16 02:05:16.021603 master-0 kubenswrapper[4147]: I0216 02:05:16.021527 4147 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 16 02:05:16.192639 master-0 kubenswrapper[4147]: I0216 02:05:16.192517 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-6fcf4c966-dctqr"]
Feb 16 02:05:16.193519 master-0 kubenswrapper[4147]: I0216 02:05:16.192934 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.196584 master-0 kubenswrapper[4147]: I0216 02:05:16.196528 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 02:05:16.196937 master-0 kubenswrapper[4147]: I0216 02:05:16.196893 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 02:05:16.197206 master-0 kubenswrapper[4147]: I0216 02:05:16.197113 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 02:05:16.237529 master-0 kubenswrapper[4147]: I0216 02:05:16.237414 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.237801 master-0 kubenswrapper[4147]: I0216 02:05:16.237566 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl7r8\" (UniqueName: \"kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.237801 master-0 kubenswrapper[4147]: I0216 02:05:16.237607 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.345224 master-0 kubenswrapper[4147]: I0216 02:05:16.344988 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7r8\" (UniqueName: \"kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.345224 master-0 kubenswrapper[4147]: I0216 02:05:16.345056 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.345224 master-0 kubenswrapper[4147]: I0216 02:05:16.345076 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.345224 master-0 kubenswrapper[4147]: I0216 02:05:16.345148 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.346377 master-0 kubenswrapper[4147]: I0216 02:05:16.346331 4147 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 16 02:05:16.351960 master-0 kubenswrapper[4147]: I0216 02:05:16.351930 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.366093 master-0 kubenswrapper[4147]: I0216 02:05:16.366038 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7r8\" (UniqueName: \"kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:16.513002 master-0 kubenswrapper[4147]: I0216 02:05:16.512947 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:05:17.191230 master-0 kubenswrapper[4147]: I0216 02:05:17.190950 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"]
Feb 16 02:05:17.193580 master-0 kubenswrapper[4147]: I0216 02:05:17.193281 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.199364 master-0 kubenswrapper[4147]: I0216 02:05:17.197829 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 02:05:17.199364 master-0 kubenswrapper[4147]: I0216 02:05:17.198171 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 02:05:17.200827 master-0 kubenswrapper[4147]: I0216 02:05:17.200622 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 02:05:17.251430 master-0 kubenswrapper[4147]: I0216 02:05:17.251333 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.251430 master-0 kubenswrapper[4147]: I0216 02:05:17.251416 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.251858 master-0 kubenswrapper[4147]: I0216 02:05:17.251578 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.251858 master-0 kubenswrapper[4147]: I0216 02:05:17.251650 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.251858 master-0 kubenswrapper[4147]: I0216 02:05:17.251694 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.352202 master-0 kubenswrapper[4147]: I0216 02:05:17.352098 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.352202 master-0 kubenswrapper[4147]: I0216 02:05:17.352184 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.352580 master-0 kubenswrapper[4147]: I0216 02:05:17.352226 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.352580 master-0 kubenswrapper[4147]: I0216 02:05:17.352261 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.352580 master-0 kubenswrapper[4147]: I0216 02:05:17.352297 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.353749 master-0 kubenswrapper[4147]: I0216 02:05:17.353699 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.354209 master-0 kubenswrapper[4147]: I0216 02:05:17.354163 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.354291 master-0 kubenswrapper[4147]: I0216 02:05:17.354230 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:17.354361 master-0 kubenswrapper[4147]: E0216 02:05:17.354314 4147 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 16 02:05:17.354423 master-0 kubenswrapper[4147]: E0216 02:05:17.354381 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:05:17.854356813 +0000 UTC m=+36.490091959 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:17.389034 master-0 kubenswrapper[4147]: I0216 02:05:17.387527 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:05:17.408150 master-0 kubenswrapper[4147]: I0216 02:05:17.407801 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" event={"ID":"456e6c3a-c16c-470b-a0cd-bb79865b54f0","Type":"ContainerStarted","Data":"450f24c15fabf2ce3093e6381873f7497e388b2d4b0a5acae355eb63b714bf74"} Feb 16 02:05:17.494460 master-0 kubenswrapper[4147]: I0216 02:05:17.494214 4147 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 02:05:17.856398 master-0 kubenswrapper[4147]: I0216 02:05:17.856239 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:05:17.856705 master-0 kubenswrapper[4147]: E0216 02:05:17.856400 4147 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:17.856705 master-0 kubenswrapper[4147]: E0216 
02:05:17.856551 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:05:18.856528152 +0000 UTC m=+37.492263308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:18.352549 master-0 kubenswrapper[4147]: I0216 02:05:18.352481 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-p2zdr"] Feb 16 02:05:18.353716 master-0 kubenswrapper[4147]: I0216 02:05:18.353593 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.356667 master-0 kubenswrapper[4147]: I0216 02:05:18.356280 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Feb 16 02:05:18.357352 master-0 kubenswrapper[4147]: I0216 02:05:18.357303 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Feb 16 02:05:18.357623 master-0 kubenswrapper[4147]: I0216 02:05:18.357560 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Feb 16 02:05:18.357968 master-0 kubenswrapper[4147]: I0216 02:05:18.357904 4147 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Feb 16 02:05:18.359848 master-0 kubenswrapper[4147]: I0216 02:05:18.359749 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-sno-bootstrap-files\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.359984 master-0 kubenswrapper[4147]: I0216 02:05:18.359885 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-resolv-conf\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.360060 master-0 kubenswrapper[4147]: I0216 02:05:18.359990 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.360152 master-0 kubenswrapper[4147]: I0216 02:05:18.360075 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-ca-bundle\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.360152 master-0 kubenswrapper[4147]: I0216 02:05:18.360128 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xzm\" (UniqueName: \"kubernetes.io/projected/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-kube-api-access-g9xzm\") pod \"assisted-installer-controller-p2zdr\" (UID: 
\"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460482 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-ca-bundle\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460515 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9xzm\" (UniqueName: \"kubernetes.io/projected/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-kube-api-access-g9xzm\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460534 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-sno-bootstrap-files\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460549 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-resolv-conf\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460566 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460609 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460643 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-ca-bundle\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460866 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-sno-bootstrap-files\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.460922 master-0 kubenswrapper[4147]: I0216 02:05:18.460892 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-resolv-conf\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 
02:05:18.489737 master-0 kubenswrapper[4147]: I0216 02:05:18.489700 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9xzm\" (UniqueName: \"kubernetes.io/projected/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-kube-api-access-g9xzm\") pod \"assisted-installer-controller-p2zdr\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.695506 master-0 kubenswrapper[4147]: I0216 02:05:18.695314 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:18.708274 master-0 kubenswrapper[4147]: W0216 02:05:18.708224 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fb79c4e_3171_4d3c_a0d1_ed1a93acafa8.slice/crio-053a88a441f8640d0f49f51a0dfba08da37a8b783fce751c0810295de49cf426 WatchSource:0}: Error finding container 053a88a441f8640d0f49f51a0dfba08da37a8b783fce751c0810295de49cf426: Status 404 returned error can't find the container with id 053a88a441f8640d0f49f51a0dfba08da37a8b783fce751c0810295de49cf426 Feb 16 02:05:18.864058 master-0 kubenswrapper[4147]: I0216 02:05:18.863998 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:05:18.864321 master-0 kubenswrapper[4147]: E0216 02:05:18.864122 4147 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:18.864321 master-0 kubenswrapper[4147]: E0216 02:05:18.864173 4147 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:05:20.864155289 +0000 UTC m=+39.499890405 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:19.412661 master-0 kubenswrapper[4147]: I0216 02:05:19.412589 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-p2zdr" event={"ID":"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8","Type":"ContainerStarted","Data":"053a88a441f8640d0f49f51a0dfba08da37a8b783fce751c0810295de49cf426"} Feb 16 02:05:19.508809 master-0 kubenswrapper[4147]: I0216 02:05:19.508745 4147 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 02:05:20.417824 master-0 kubenswrapper[4147]: I0216 02:05:20.417053 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" event={"ID":"456e6c3a-c16c-470b-a0cd-bb79865b54f0","Type":"ContainerStarted","Data":"84b61562b0c4e54147ae15c3e99cac0408baf94416f7643d3aafcf6087c2cdf4"} Feb 16 02:05:20.434140 master-0 kubenswrapper[4147]: I0216 02:05:20.433629 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" podStartSLOduration=0.886468313 podStartE2EDuration="4.433562688s" podCreationTimestamp="2026-02-16 02:05:16 +0000 UTC" firstStartedPulling="2026-02-16 02:05:16.537738631 +0000 UTC m=+35.173473787" lastFinishedPulling="2026-02-16 02:05:20.084833046 +0000 UTC m=+38.720568162" observedRunningTime="2026-02-16 02:05:20.430074085 +0000 UTC m=+39.065809201" 
watchObservedRunningTime="2026-02-16 02:05:20.433562688 +0000 UTC m=+39.069297864" Feb 16 02:05:20.875412 master-0 kubenswrapper[4147]: I0216 02:05:20.875034 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:05:20.875412 master-0 kubenswrapper[4147]: E0216 02:05:20.875200 4147 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:20.875412 master-0 kubenswrapper[4147]: E0216 02:05:20.875281 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:05:24.875256032 +0000 UTC m=+43.510991188 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:22.374889 master-0 kubenswrapper[4147]: I0216 02:05:22.374755 4147 csr.go:261] certificate signing request csr-rdwlg is approved, waiting to be issued Feb 16 02:05:22.383609 master-0 kubenswrapper[4147]: I0216 02:05:22.383584 4147 csr.go:257] certificate signing request csr-rdwlg is issued Feb 16 02:05:22.700570 master-0 kubenswrapper[4147]: I0216 02:05:22.700184 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-xgtz9"] Feb 16 02:05:22.700570 master-0 kubenswrapper[4147]: I0216 02:05:22.700476 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-xgtz9" Feb 16 02:05:22.792690 master-0 kubenswrapper[4147]: I0216 02:05:22.792602 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fpjh\" (UniqueName: \"kubernetes.io/projected/a874e346-456c-4e93-87bd-7b70434ddeb1-kube-api-access-4fpjh\") pod \"mtu-prober-xgtz9\" (UID: \"a874e346-456c-4e93-87bd-7b70434ddeb1\") " pod="openshift-network-operator/mtu-prober-xgtz9" Feb 16 02:05:22.893498 master-0 kubenswrapper[4147]: I0216 02:05:22.893460 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fpjh\" (UniqueName: \"kubernetes.io/projected/a874e346-456c-4e93-87bd-7b70434ddeb1-kube-api-access-4fpjh\") pod \"mtu-prober-xgtz9\" (UID: \"a874e346-456c-4e93-87bd-7b70434ddeb1\") " pod="openshift-network-operator/mtu-prober-xgtz9" Feb 16 02:05:22.922444 master-0 kubenswrapper[4147]: I0216 02:05:22.922413 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fpjh\" 
(UniqueName: \"kubernetes.io/projected/a874e346-456c-4e93-87bd-7b70434ddeb1-kube-api-access-4fpjh\") pod \"mtu-prober-xgtz9\" (UID: \"a874e346-456c-4e93-87bd-7b70434ddeb1\") " pod="openshift-network-operator/mtu-prober-xgtz9" Feb 16 02:05:23.016190 master-0 kubenswrapper[4147]: I0216 02:05:23.016137 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-xgtz9" Feb 16 02:05:23.386419 master-0 kubenswrapper[4147]: I0216 02:05:23.386274 4147 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 01:56:50 +0000 UTC, rotation deadline is 2026-02-16 22:24:19.739772528 +0000 UTC Feb 16 02:05:23.386419 master-0 kubenswrapper[4147]: I0216 02:05:23.386329 4147 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h18m56.353448336s for next certificate rotation Feb 16 02:05:23.530914 master-0 kubenswrapper[4147]: W0216 02:05:23.530839 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda874e346_456c_4e93_87bd_7b70434ddeb1.slice/crio-821f1d48faf9d45a2dade69f93e659089f19489c18c9dee5c3fa79d5976d4f78 WatchSource:0}: Error finding container 821f1d48faf9d45a2dade69f93e659089f19489c18c9dee5c3fa79d5976d4f78: Status 404 returned error can't find the container with id 821f1d48faf9d45a2dade69f93e659089f19489c18c9dee5c3fa79d5976d4f78 Feb 16 02:05:24.387427 master-0 kubenswrapper[4147]: I0216 02:05:24.387317 4147 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 01:56:50 +0000 UTC, rotation deadline is 2026-02-16 21:05:31.498512533 +0000 UTC Feb 16 02:05:24.387427 master-0 kubenswrapper[4147]: I0216 02:05:24.387367 4147 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h0m7.111150463s for next certificate rotation Feb 16 02:05:24.429996 master-0 kubenswrapper[4147]: I0216 02:05:24.429927 4147 generic.go:334] "Generic (PLEG): 
container finished" podID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerID="e0727143cdcad3cc5251095472bb96e72f7ab1b59c0a90ac12887c7c83657168" exitCode=0 Feb 16 02:05:24.430166 master-0 kubenswrapper[4147]: I0216 02:05:24.430047 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-p2zdr" event={"ID":"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8","Type":"ContainerDied","Data":"e0727143cdcad3cc5251095472bb96e72f7ab1b59c0a90ac12887c7c83657168"} Feb 16 02:05:24.432779 master-0 kubenswrapper[4147]: I0216 02:05:24.432716 4147 generic.go:334] "Generic (PLEG): container finished" podID="a874e346-456c-4e93-87bd-7b70434ddeb1" containerID="55eb3affcbf11e7c854417599461f5fea225338fb48ac5a0d81226de9a467092" exitCode=0 Feb 16 02:05:24.432908 master-0 kubenswrapper[4147]: I0216 02:05:24.432783 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-xgtz9" event={"ID":"a874e346-456c-4e93-87bd-7b70434ddeb1","Type":"ContainerDied","Data":"55eb3affcbf11e7c854417599461f5fea225338fb48ac5a0d81226de9a467092"} Feb 16 02:05:24.432908 master-0 kubenswrapper[4147]: I0216 02:05:24.432868 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-xgtz9" event={"ID":"a874e346-456c-4e93-87bd-7b70434ddeb1","Type":"ContainerStarted","Data":"821f1d48faf9d45a2dade69f93e659089f19489c18c9dee5c3fa79d5976d4f78"} Feb 16 02:05:24.909598 master-0 kubenswrapper[4147]: I0216 02:05:24.909499 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:05:24.909920 master-0 kubenswrapper[4147]: E0216 02:05:24.909700 4147 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:24.909920 master-0 kubenswrapper[4147]: E0216 02:05:24.909781 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:05:32.909757503 +0000 UTC m=+51.545492649 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:05:25.461932 master-0 kubenswrapper[4147]: I0216 02:05:25.461875 4147 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-xgtz9" Feb 16 02:05:25.469014 master-0 kubenswrapper[4147]: I0216 02:05:25.468953 4147 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-p2zdr" Feb 16 02:05:25.614362 master-0 kubenswrapper[4147]: I0216 02:05:25.614251 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fpjh\" (UniqueName: \"kubernetes.io/projected/a874e346-456c-4e93-87bd-7b70434ddeb1-kube-api-access-4fpjh\") pod \"a874e346-456c-4e93-87bd-7b70434ddeb1\" (UID: \"a874e346-456c-4e93-87bd-7b70434ddeb1\") " Feb 16 02:05:25.614362 master-0 kubenswrapper[4147]: I0216 02:05:25.614332 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-resolv-conf\") pod \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " Feb 16 02:05:25.614755 master-0 kubenswrapper[4147]: I0216 02:05:25.614382 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9xzm\" (UniqueName: \"kubernetes.io/projected/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-kube-api-access-g9xzm\") pod \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " Feb 16 02:05:25.614755 master-0 kubenswrapper[4147]: I0216 02:05:25.614429 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-sno-bootstrap-files\") pod \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " Feb 16 02:05:25.614755 master-0 kubenswrapper[4147]: I0216 02:05:25.614512 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-var-run-resolv-conf\") pod \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " Feb 16 
02:05:25.614755 master-0 kubenswrapper[4147]: I0216 02:05:25.614556 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-ca-bundle\") pod \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\" (UID: \"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8\") " Feb 16 02:05:25.614755 master-0 kubenswrapper[4147]: I0216 02:05:25.614714 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" (UID: "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:05:25.615059 master-0 kubenswrapper[4147]: I0216 02:05:25.614824 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" (UID: "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:05:25.615369 master-0 kubenswrapper[4147]: I0216 02:05:25.615279 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" (UID: "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:05:25.615482 master-0 kubenswrapper[4147]: I0216 02:05:25.615404 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" (UID: "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:05:25.622216 master-0 kubenswrapper[4147]: I0216 02:05:25.622142 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-kube-api-access-g9xzm" (OuterVolumeSpecName: "kube-api-access-g9xzm") pod "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" (UID: "5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8"). InnerVolumeSpecName "kube-api-access-g9xzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:05:25.622396 master-0 kubenswrapper[4147]: I0216 02:05:25.622219 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a874e346-456c-4e93-87bd-7b70434ddeb1-kube-api-access-4fpjh" (OuterVolumeSpecName: "kube-api-access-4fpjh") pod "a874e346-456c-4e93-87bd-7b70434ddeb1" (UID: "a874e346-456c-4e93-87bd-7b70434ddeb1"). InnerVolumeSpecName "kube-api-access-4fpjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:05:25.715286 master-0 kubenswrapper[4147]: I0216 02:05:25.715069 4147 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Feb 16 02:05:25.715286 master-0 kubenswrapper[4147]: I0216 02:05:25.715119 4147 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:05:25.715286 master-0 kubenswrapper[4147]: I0216 02:05:25.715138 4147 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fpjh\" (UniqueName: \"kubernetes.io/projected/a874e346-456c-4e93-87bd-7b70434ddeb1-kube-api-access-4fpjh\") on node \"master-0\" DevicePath \"\""
Feb 16 02:05:25.715286 master-0 kubenswrapper[4147]: I0216 02:05:25.715155 4147 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Feb 16 02:05:25.715286 master-0 kubenswrapper[4147]: I0216 02:05:25.715173 4147 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9xzm\" (UniqueName: \"kubernetes.io/projected/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-kube-api-access-g9xzm\") on node \"master-0\" DevicePath \"\""
Feb 16 02:05:25.715286 master-0 kubenswrapper[4147]: I0216 02:05:25.715190 4147 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Feb 16 02:05:26.439573 master-0 kubenswrapper[4147]: I0216 02:05:26.439429 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-p2zdr" event={"ID":"5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8","Type":"ContainerDied","Data":"053a88a441f8640d0f49f51a0dfba08da37a8b783fce751c0810295de49cf426"}
Feb 16 02:05:26.439573 master-0 kubenswrapper[4147]: I0216 02:05:26.439537 4147 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053a88a441f8640d0f49f51a0dfba08da37a8b783fce751c0810295de49cf426"
Feb 16 02:05:26.440156 master-0 kubenswrapper[4147]: I0216 02:05:26.439578 4147 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-p2zdr"
Feb 16 02:05:26.442581 master-0 kubenswrapper[4147]: I0216 02:05:26.441810 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-xgtz9" event={"ID":"a874e346-456c-4e93-87bd-7b70434ddeb1","Type":"ContainerDied","Data":"821f1d48faf9d45a2dade69f93e659089f19489c18c9dee5c3fa79d5976d4f78"}
Feb 16 02:05:26.442581 master-0 kubenswrapper[4147]: I0216 02:05:26.441866 4147 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821f1d48faf9d45a2dade69f93e659089f19489c18c9dee5c3fa79d5976d4f78"
Feb 16 02:05:26.442581 master-0 kubenswrapper[4147]: I0216 02:05:26.441978 4147 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-network-operator/mtu-prober-xgtz9"
Feb 16 02:05:27.201662 master-0 kubenswrapper[4147]: I0216 02:05:27.201575 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Feb 16 02:05:27.201662 master-0 kubenswrapper[4147]: I0216 02:05:27.201660 4147 scope.go:117] "RemoveContainer" containerID="1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b"
Feb 16 02:05:27.766760 master-0 kubenswrapper[4147]: I0216 02:05:27.766684 4147 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-xgtz9"]
Feb 16 02:05:27.771374 master-0 kubenswrapper[4147]: I0216 02:05:27.771253 4147 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-xgtz9"]
Feb 16 02:05:28.191257 master-0 kubenswrapper[4147]: I0216 02:05:28.191198 4147 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a874e346-456c-4e93-87bd-7b70434ddeb1" path="/var/lib/kubelet/pods/a874e346-456c-4e93-87bd-7b70434ddeb1/volumes"
Feb 16 02:05:28.450244 master-0 kubenswrapper[4147]: I0216 02:05:28.450094 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log"
Feb 16 02:05:28.451173 master-0 kubenswrapper[4147]: I0216 02:05:28.450694 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"93fa1f75b959883173b882fc0c221239f90d8c0f0c6f464304aa368bf78625b2"}
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.568114 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=5.56808423 podStartE2EDuration="5.56808423s" podCreationTimestamp="2026-02-16 02:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:05:28.467326448 +0000 UTC m=+47.103061624" watchObservedRunningTime="2026-02-16 02:05:32.56808423 +0000 UTC m=+51.203819376"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.568574 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8jgrl"]
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: E0216 02:05:32.568710 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a874e346-456c-4e93-87bd-7b70434ddeb1" containerName="prober"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.568728 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="a874e346-456c-4e93-87bd-7b70434ddeb1" containerName="prober"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: E0216 02:05:32.568742 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.568758 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.568796 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="a874e346-456c-4e93-87bd-7b70434ddeb1" containerName="prober"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.568840 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.569131 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.572489 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.572739 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.572760 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 16 02:05:32.573495 master-0 kubenswrapper[4147]: I0216 02:05:32.572832 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 02:05:32.665823 master-0 kubenswrapper[4147]: I0216 02:05:32.665556 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.665823 master-0 kubenswrapper[4147]: I0216 02:05:32.665641 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.665823 master-0 kubenswrapper[4147]: I0216 02:05:32.665672 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") "
pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.665823 master-0 kubenswrapper[4147]: I0216 02:05:32.665690 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.665823 master-0 kubenswrapper[4147]: I0216 02:05:32.665708 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.665823 master-0 kubenswrapper[4147]: I0216 02:05:32.665730 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.665823 master-0 kubenswrapper[4147]: I0216 02:05:32.665756 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.665823 master-0 kubenswrapper[4147]: I0216 02:05:32.665775 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666355 master-0 kubenswrapper[4147]: I0216 02:05:32.665879 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666355 master-0 kubenswrapper[4147]: I0216 02:05:32.666034 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666355 master-0 kubenswrapper[4147]: I0216 02:05:32.666154 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666355 master-0 kubenswrapper[4147]: I0216 02:05:32.666261 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666355 master-0 kubenswrapper[4147]: I0216 02:05:32.666313 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666355 master-0 kubenswrapper[4147]: I0216 02:05:32.666351 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666559 master-0 kubenswrapper[4147]: I0216 02:05:32.666380 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666559 master-0 kubenswrapper[4147]: I0216 02:05:32.666420 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.666559 master-0 kubenswrapper[4147]: I0216 02:05:32.666496 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdllq\" (UniqueName: \"kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.764151 master-0 kubenswrapper[4147]: I0216 02:05:32.764020 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mvdkf"]
Feb 16 02:05:32.765707 master-0 kubenswrapper[4147]: I0216 02:05:32.765663 4147 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.766726 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.766791 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.766871 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.766904 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.766934 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.766964 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.766999 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767034 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767137 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767207 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767299 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767350 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767366 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767394 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767482 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdllq\" (UniqueName: \"kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767488 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName:
\"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767518 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768062 master-0 kubenswrapper[4147]: I0216 02:05:32.767551 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.767585 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.767668 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.767704 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.767717 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.767802 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.767852 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.767807 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.767976 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.768594 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768708 master-0 kubenswrapper[4147]: I0216 02:05:32.768670 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.768990 master-0 kubenswrapper[4147]: I0216 02:05:32.768862 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 16 02:05:32.769027 master-0 kubenswrapper[4147]: I0216 02:05:32.768995 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.769065 master-0 kubenswrapper[4147]: I0216 02:05:32.769032 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.769065 master-0 kubenswrapper[4147]: I0216 02:05:32.769047 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName:
\"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.769065 master-0 kubenswrapper[4147]: I0216 02:05:32.769062 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.769177 master-0 kubenswrapper[4147]: I0216 02:05:32.769163 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 16 02:05:32.769282 master-0 kubenswrapper[4147]: I0216 02:05:32.769237 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.794479 master-0 kubenswrapper[4147]: I0216 02:05:32.793315 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdllq\" (UniqueName: \"kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:05:32.868574 master-0 kubenswrapper[4147]: I0216 02:05:32.868470 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.868817 master-0 kubenswrapper[4147]: I0216 02:05:32.868798 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.868929 master-0 kubenswrapper[4147]: I0216 02:05:32.868910 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.869052 master-0 kubenswrapper[4147]: I0216 02:05:32.869036 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.869159 master-0 kubenswrapper[4147]: I0216 02:05:32.869140 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.869270 master-0 kubenswrapper[4147]: I0216 02:05:32.869254 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.869373 master-0 kubenswrapper[4147]: I0216 02:05:32.869358 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ns9l\" (UniqueName: \"kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.869505 master-0 kubenswrapper[4147]: I0216 02:05:32.869489 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.895047 master-0 kubenswrapper[4147]: I0216 02:05:32.894979 4147 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-8jgrl" Feb 16 02:05:32.971356 master-0 kubenswrapper[4147]: I0216 02:05:32.970337 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ns9l\" (UniqueName: \"kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:05:32.971356 master-0 kubenswrapper[4147]: I0216 02:05:32.971005 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:05:32.971962 master-0 kubenswrapper[4147]: I0216 02:05:32.971911 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:05:32.972133 master-0 kubenswrapper[4147]: I0216 02:05:32.972105 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:05:32.972318 master-0 kubenswrapper[4147]: I0216 02:05:32.972292 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:32.972496 master-0 kubenswrapper[4147]: I0216 02:05:32.972469 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.972656 master-0 kubenswrapper[4147]: I0216 02:05:32.972631 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.972835 master-0 kubenswrapper[4147]: I0216 02:05:32.972810 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.972966 master-0 kubenswrapper[4147]: E0216 02:05:32.972490 4147 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 16 02:05:32.973236 master-0 kubenswrapper[4147]: I0216 02:05:32.973179 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName:
\"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.973318 master-0 kubenswrapper[4147]: I0216 02:05:32.973226 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.973318 master-0 kubenswrapper[4147]: I0216 02:05:32.972369 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.973318 master-0 kubenswrapper[4147]: I0216 02:05:32.972133 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.973318 master-0 kubenswrapper[4147]: I0216 02:05:32.973208 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.973318 master-0 kubenswrapper[4147]: I0216 02:05:32.972299 4147
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.973318 master-0 kubenswrapper[4147]: I0216 02:05:32.972888 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.973776 master-0 kubenswrapper[4147]: E0216 02:05:32.973751 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:05:48.973217121 +0000 UTC m=+67.608952277 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found
Feb 16 02:05:32.974938 master-0 kubenswrapper[4147]: I0216 02:05:32.974836 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:32.991334 master-0 kubenswrapper[4147]: I0216 02:05:32.991265 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ns9l\" (UniqueName: \"kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:33.092932 master-0 kubenswrapper[4147]: I0216 02:05:33.092898 4147 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:05:33.106056 master-0 kubenswrapper[4147]: W0216 02:05:33.105973 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf91346c7_bde4_4fa2_ac27_b5f0d25eeb75.slice/crio-7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90 WatchSource:0}: Error finding container 7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90: Status 404 returned error can't find the container with id 7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90
Feb 16 02:05:33.464745 master-0 kubenswrapper[4147]: I0216 02:05:33.464688 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jgrl" event={"ID":"430c146b-ceaf-411a-add6-ce949243aabf","Type":"ContainerStarted","Data":"359c73798649b5d5b089f1492d973d2f87ffd23f53f2f5868ba22d8d7543d4cc"}
Feb 16 02:05:33.466062 master-0 kubenswrapper[4147]: I0216 02:05:33.466019 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" event={"ID":"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75","Type":"ContainerStarted","Data":"7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90"}
Feb 16 02:05:33.550237 master-0 kubenswrapper[4147]: I0216 02:05:33.550157 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gn9mv"]
Feb 16 02:05:33.550900 master-0 kubenswrapper[4147]: I0216 02:05:33.550835 4147 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:33.551065 master-0 kubenswrapper[4147]: E0216 02:05:33.550967 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:33.579020 master-0 kubenswrapper[4147]: I0216 02:05:33.578933 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:33.579020 master-0 kubenswrapper[4147]: I0216 02:05:33.578983 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4sb\" (UniqueName: \"kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:33.680721 master-0 kubenswrapper[4147]: I0216 02:05:33.680404 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:33.680964 master-0 kubenswrapper[4147]: I0216 02:05:33.680744 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m4sb\"
(UniqueName: \"kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:33.680964 master-0 kubenswrapper[4147]: E0216 02:05:33.680748 4147 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:33.680964 master-0 kubenswrapper[4147]: E0216 02:05:33.680921 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:05:34.18088458 +0000 UTC m=+52.816619736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:33.709183 master-0 kubenswrapper[4147]: I0216 02:05:33.709100 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m4sb\" (UniqueName: \"kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:34.184147 master-0 kubenswrapper[4147]: I0216 02:05:34.184046 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:34.184413 master-0
kubenswrapper[4147]: E0216 02:05:34.184228 4147 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:34.184413 master-0 kubenswrapper[4147]: E0216 02:05:34.184339 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:05:35.184314099 +0000 UTC m=+53.820049255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:35.187788 master-0 kubenswrapper[4147]: I0216 02:05:35.187405 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:35.188374 master-0 kubenswrapper[4147]: E0216 02:05:35.187947 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:35.193249 master-0 kubenswrapper[4147]: I0216 02:05:35.193182 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:35.193393 master-0 kubenswrapper[4147]: E0216 02:05:35.193358 4147 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:35.193533 master-0 kubenswrapper[4147]: E0216 02:05:35.193508 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:05:37.193487052 +0000 UTC m=+55.829222168 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:36.476617 master-0 kubenswrapper[4147]: I0216 02:05:36.476522 4147 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="25d520296eb3e3e0c239fcaebd996a70fe80cf8a6487bc284e94db513bb2809d" exitCode=0
Feb 16 02:05:36.476617 master-0 kubenswrapper[4147]: I0216 02:05:36.476596 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" event={"ID":"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75","Type":"ContainerDied","Data":"25d520296eb3e3e0c239fcaebd996a70fe80cf8a6487bc284e94db513bb2809d"}
Feb 16 02:05:37.187156 master-0 kubenswrapper[4147]: I0216 02:05:37.187083 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:37.187419 master-0 kubenswrapper[4147]: E0216 02:05:37.187287 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:37.240759 master-0 kubenswrapper[4147]: I0216 02:05:37.240697 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:37.240952 master-0 kubenswrapper[4147]: E0216 02:05:37.240896 4147 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:37.241037 master-0 kubenswrapper[4147]: E0216 02:05:37.241010 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:05:41.240974795 +0000 UTC m=+59.876709961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:39.186941 master-0 kubenswrapper[4147]: I0216 02:05:39.186869 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:39.188017 master-0 kubenswrapper[4147]: E0216 02:05:39.187013 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:41.187901 master-0 kubenswrapper[4147]: I0216 02:05:41.187698 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:41.187901 master-0 kubenswrapper[4147]: E0216 02:05:41.187887 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:41.277490 master-0 kubenswrapper[4147]: I0216 02:05:41.277402 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:41.277704 master-0 kubenswrapper[4147]: E0216 02:05:41.277625 4147 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:41.277774 master-0 kubenswrapper[4147]: E0216 02:05:41.277733 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:05:49.27770853 +0000 UTC m=+67.913443646 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:43.188487 master-0 kubenswrapper[4147]: I0216 02:05:43.187758 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:43.188487 master-0 kubenswrapper[4147]: E0216 02:05:43.188067 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:44.960057 master-0 kubenswrapper[4147]: I0216 02:05:44.959978 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"]
Feb 16 02:05:44.962191 master-0 kubenswrapper[4147]: I0216 02:05:44.960997 4147 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:44.963377 master-0 kubenswrapper[4147]: I0216 02:05:44.962870 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 02:05:44.963516 master-0 kubenswrapper[4147]: I0216 02:05:44.963467 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 02:05:44.964138 master-0 kubenswrapper[4147]: I0216 02:05:44.964091 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 02:05:44.964663 master-0 kubenswrapper[4147]: I0216 02:05:44.964396 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 02:05:44.965033 master-0 kubenswrapper[4147]: I0216 02:05:44.964868 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 16 02:05:45.108816 master-0 kubenswrapper[4147]: I0216 02:05:45.108697 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.108816 master-0 kubenswrapper[4147]: I0216 02:05:45.108785 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cq8\" (UniqueName: \"kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16
02:05:45.108816 master-0 kubenswrapper[4147]: I0216 02:05:45.108826 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.109520 master-0 kubenswrapper[4147]: I0216 02:05:45.108876 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.177412 master-0 kubenswrapper[4147]: I0216 02:05:45.177338 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmw46"]
Feb 16 02:05:45.179931 master-0 kubenswrapper[4147]: I0216 02:05:45.179883 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:05:45.184256 master-0 kubenswrapper[4147]: I0216 02:05:45.184180 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 16 02:05:45.185245 master-0 kubenswrapper[4147]: I0216 02:05:45.185168 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 02:05:45.187327 master-0 kubenswrapper[4147]: I0216 02:05:45.187284 4147 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:45.187872 master-0 kubenswrapper[4147]: E0216 02:05:45.187801 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:45.209875 master-0 kubenswrapper[4147]: I0216 02:05:45.209814 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.210291 master-0 kubenswrapper[4147]: I0216 02:05:45.210129 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58cq8\" (UniqueName: \"kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.210694 master-0 kubenswrapper[4147]: I0216 02:05:45.210593 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.210930 master-0 kubenswrapper[4147]: I0216 02:05:45.210891 4147 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.211899 master-0 kubenswrapper[4147]: I0216 02:05:45.211142 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.212108 master-0 kubenswrapper[4147]: I0216 02:05:45.212042 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.224841 master-0 kubenswrapper[4147]: I0216 02:05:45.224786 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.244287 master-0 kubenswrapper[4147]: I0216 02:05:45.244083 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cq8\" (UniqueName: \"kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\")
" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.282300 master-0 kubenswrapper[4147]: I0216 02:05:45.281992 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:05:45.312240 master-0 kubenswrapper[4147]: I0216 02:05:45.312158 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-slash\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:05:45.312240 master-0 kubenswrapper[4147]: I0216 02:05:45.312223 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-config\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:05:45.312240 master-0 kubenswrapper[4147]: I0216 02:05:45.312259 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-node-log\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312323 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0
kubenswrapper[4147]: I0216 02:05:45.312375 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-env-overrides\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312402 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4pzn\" (UniqueName: \"kubernetes.io/projected/30d41850-9a4b-4ce2-9902-a59492adeb24-kube-api-access-k4pzn\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312450 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-systemd-units\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312467 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-netns\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312481 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-etc-openvswitch\") pod \"ovnkube-node-zmw46\" (UID:
\"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312509 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-ovn\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312548 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-kubelet\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312564 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-log-socket\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312580 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312612 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-bin\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312627 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-netd\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312645 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-script-lib\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312662 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-var-lib-openvswitch\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312676 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-openvswitch\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.312670 master-0 kubenswrapper[4147]: I0216 02:05:45.312696 4147 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d41850-9a4b-4ce2-9902-a59492adeb24-ovn-node-metrics-cert\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.313251 master-0 kubenswrapper[4147]: I0216 02:05:45.312716 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-systemd\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413511 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-etc-openvswitch\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413583 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-systemd-units\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413603 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-netns\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413622 4147 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-ovn\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413702 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-ovn\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413750 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-netns\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413775 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-systemd-units\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413794 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-etc-openvswitch\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413840 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413860 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-kubelet\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.413923 master-0 kubenswrapper[4147]: I0216 02:05:45.413879 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-log-socket\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.413991 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414083 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-bin\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414167 4147 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-bin\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414139 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-kubelet\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414177 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-netd\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414222 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-netd\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414226 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-script-lib\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414105 4147 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-log-socket\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414271 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-var-lib-openvswitch\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414323 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-var-lib-openvswitch\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414378 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-openvswitch\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414413 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d41850-9a4b-4ce2-9902-a59492adeb24-ovn-node-metrics-cert\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414496 4147 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-openvswitch\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414524 master-0 kubenswrapper[4147]: I0216 02:05:45.414503 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-systemd\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414919 master-0 kubenswrapper[4147]: I0216 02:05:45.414559 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-slash\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414919 master-0 kubenswrapper[4147]: I0216 02:05:45.414595 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-config\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414919 master-0 kubenswrapper[4147]: I0216 02:05:45.414683 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-node-log\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414919 master-0 kubenswrapper[4147]: I0216 02:05:45.414725 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414919 master-0 kubenswrapper[4147]: I0216 02:05:45.414760 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-env-overrides\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.414919 master-0 kubenswrapper[4147]: I0216 02:05:45.414794 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4pzn\" (UniqueName: \"kubernetes.io/projected/30d41850-9a4b-4ce2-9902-a59492adeb24-kube-api-access-k4pzn\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.416000 master-0 kubenswrapper[4147]: I0216 02:05:45.415181 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-script-lib\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.416000 master-0 kubenswrapper[4147]: I0216 02:05:45.415243 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-systemd\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.416000 master-0 kubenswrapper[4147]: I0216 02:05:45.415288 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.416000 master-0 kubenswrapper[4147]: I0216 02:05:45.415405 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-slash\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.416000 master-0 kubenswrapper[4147]: I0216 02:05:45.415564 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-node-log\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.416000 master-0 kubenswrapper[4147]: I0216 02:05:45.415950 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-config\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.416838 master-0 kubenswrapper[4147]: I0216 02:05:45.416795 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-env-overrides\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.418546 master-0 kubenswrapper[4147]: I0216 02:05:45.418404 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/30d41850-9a4b-4ce2-9902-a59492adeb24-ovn-node-metrics-cert\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.442214 master-0 kubenswrapper[4147]: I0216 02:05:45.442082 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4pzn\" (UniqueName: \"kubernetes.io/projected/30d41850-9a4b-4ce2-9902-a59492adeb24-kube-api-access-k4pzn\") pod \"ovnkube-node-zmw46\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.567873 master-0 kubenswrapper[4147]: I0216 02:05:45.567794 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:05:45.572206 master-0 kubenswrapper[4147]: W0216 02:05:45.572142 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7317f91_9441_449f_9738_85da088cf94f.slice/crio-fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb WatchSource:0}: Error finding container fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb: Status 404 returned error can't find the container with id fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb Feb 16 02:05:45.588511 master-0 kubenswrapper[4147]: W0216 02:05:45.588313 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30d41850_9a4b_4ce2_9902_a59492adeb24.slice/crio-e947e806d4cd304de7ae0e1eadc87bafdcb0e816f937b266a440ab82d15412f1 WatchSource:0}: Error finding container e947e806d4cd304de7ae0e1eadc87bafdcb0e816f937b266a440ab82d15412f1: Status 404 returned error can't find the container with id e947e806d4cd304de7ae0e1eadc87bafdcb0e816f937b266a440ab82d15412f1 Feb 16 02:05:46.507801 master-0 kubenswrapper[4147]: I0216 02:05:46.507244 4147 
generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="93e94006a31f3ba668f3844369615d0dcc4ff0267ec4f323096fa745c6b0818c" exitCode=0 Feb 16 02:05:46.508345 master-0 kubenswrapper[4147]: I0216 02:05:46.507824 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" event={"ID":"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75","Type":"ContainerDied","Data":"93e94006a31f3ba668f3844369615d0dcc4ff0267ec4f323096fa745c6b0818c"} Feb 16 02:05:46.511247 master-0 kubenswrapper[4147]: I0216 02:05:46.511201 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"e947e806d4cd304de7ae0e1eadc87bafdcb0e816f937b266a440ab82d15412f1"} Feb 16 02:05:46.512755 master-0 kubenswrapper[4147]: I0216 02:05:46.512708 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jgrl" event={"ID":"430c146b-ceaf-411a-add6-ce949243aabf","Type":"ContainerStarted","Data":"3bd384e0d327a6216f1728d28ddc7128dd28e3d2e8783fac89af77b04cc9f0bf"} Feb 16 02:05:46.521052 master-0 kubenswrapper[4147]: I0216 02:05:46.514742 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" event={"ID":"f7317f91-9441-449f-9738-85da088cf94f","Type":"ContainerStarted","Data":"29d48a215f0d6d45e8a38ea306749993550d4a017b0c60417aeaaaa9f1e38d77"} Feb 16 02:05:46.521052 master-0 kubenswrapper[4147]: I0216 02:05:46.514792 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" event={"ID":"f7317f91-9441-449f-9738-85da088cf94f","Type":"ContainerStarted","Data":"fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb"} Feb 16 02:05:46.542772 master-0 kubenswrapper[4147]: I0216 02:05:46.542677 4147 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/multus-8jgrl" podStartSLOduration=1.803497949 podStartE2EDuration="14.54264703s" podCreationTimestamp="2026-02-16 02:05:32 +0000 UTC" firstStartedPulling="2026-02-16 02:05:32.913544825 +0000 UTC m=+51.549279971" lastFinishedPulling="2026-02-16 02:05:45.652693936 +0000 UTC m=+64.288429052" observedRunningTime="2026-02-16 02:05:46.541496933 +0000 UTC m=+65.177232049" watchObservedRunningTime="2026-02-16 02:05:46.54264703 +0000 UTC m=+65.178382186" Feb 16 02:05:47.186926 master-0 kubenswrapper[4147]: I0216 02:05:47.186854 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:05:47.187123 master-0 kubenswrapper[4147]: E0216 02:05:47.186999 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:05:48.142331 master-0 kubenswrapper[4147]: I0216 02:05:48.142237 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-hswdj"] Feb 16 02:05:48.143600 master-0 kubenswrapper[4147]: I0216 02:05:48.142789 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:05:48.143600 master-0 kubenswrapper[4147]: E0216 02:05:48.142847 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:05:48.239772 master-0 kubenswrapper[4147]: I0216 02:05:48.239662 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:05:48.340892 master-0 kubenswrapper[4147]: I0216 02:05:48.340795 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:05:48.360766 master-0 kubenswrapper[4147]: E0216 02:05:48.360708 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 02:05:48.360766 master-0 kubenswrapper[4147]: E0216 02:05:48.360745 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 02:05:48.360766 master-0 kubenswrapper[4147]: E0216 02:05:48.360757 4147 projected.go:194] Error preparing data for projected volume kube-api-access-pfgxq for pod openshift-network-diagnostics/network-check-target-hswdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 02:05:48.361031 master-0 kubenswrapper[4147]: E0216 02:05:48.360824 4147 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq podName:e478bdcc-052e-42f8-91b6-58c26cfc9cfc nodeName:}" failed. No retries permitted until 2026-02-16 02:05:48.860804304 +0000 UTC m=+67.496539420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfgxq" (UniqueName: "kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq") pod "network-check-target-hswdj" (UID: "e478bdcc-052e-42f8-91b6-58c26cfc9cfc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 02:05:48.948343 master-0 kubenswrapper[4147]: I0216 02:05:48.948285 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:05:48.948636 master-0 kubenswrapper[4147]: E0216 02:05:48.948588 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 02:05:48.948681 master-0 kubenswrapper[4147]: E0216 02:05:48.948640 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 02:05:48.948681 master-0 kubenswrapper[4147]: E0216 02:05:48.948673 4147 projected.go:194] Error preparing data for projected volume kube-api-access-pfgxq for pod openshift-network-diagnostics/network-check-target-hswdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Feb 16 02:05:48.948783 master-0 kubenswrapper[4147]: E0216 02:05:48.948751 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq podName:e478bdcc-052e-42f8-91b6-58c26cfc9cfc nodeName:}" failed. No retries permitted until 2026-02-16 02:05:49.948726312 +0000 UTC m=+68.584461458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pfgxq" (UniqueName: "kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq") pod "network-check-target-hswdj" (UID: "e478bdcc-052e-42f8-91b6-58c26cfc9cfc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:05:49.049511 master-0 kubenswrapper[4147]: I0216 02:05:49.049410 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:05:49.049753 master-0 kubenswrapper[4147]: E0216 02:05:49.049636 4147 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 16 02:05:49.049753 master-0 kubenswrapper[4147]: E0216 02:05:49.049714 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:06:21.049691974 +0000 UTC m=+99.685427110 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found
Feb 16 02:05:49.187537 master-0 kubenswrapper[4147]: I0216 02:05:49.187485 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:49.187965 master-0 kubenswrapper[4147]: E0216 02:05:49.187619 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:49.352229 master-0 kubenswrapper[4147]: I0216 02:05:49.352133 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:49.352550 master-0 kubenswrapper[4147]: E0216 02:05:49.352341 4147 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:49.352550 master-0 kubenswrapper[4147]: E0216 02:05:49.352497 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:06:05.352467699 +0000 UTC m=+83.988202855 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:05:49.527793 master-0 kubenswrapper[4147]: I0216 02:05:49.527595 4147 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="d0f78aff7e0b714e84872137c91a78811349c06129b280efb18e955c4097bbb8" exitCode=0
Feb 16 02:05:49.527793 master-0 kubenswrapper[4147]: I0216 02:05:49.527663 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" event={"ID":"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75","Type":"ContainerDied","Data":"d0f78aff7e0b714e84872137c91a78811349c06129b280efb18e955c4097bbb8"}
Feb 16 02:05:49.959480 master-0 kubenswrapper[4147]: I0216 02:05:49.957397 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:05:49.959480 master-0 kubenswrapper[4147]: E0216 02:05:49.957732 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 02:05:49.959480 master-0 kubenswrapper[4147]: E0216 02:05:49.957780 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 02:05:49.959480 master-0 kubenswrapper[4147]: E0216 02:05:49.957803 4147 projected.go:194] Error preparing data for projected volume kube-api-access-pfgxq for pod openshift-network-diagnostics/network-check-target-hswdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:05:49.959480 master-0 kubenswrapper[4147]: E0216 02:05:49.957900 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq podName:e478bdcc-052e-42f8-91b6-58c26cfc9cfc nodeName:}" failed. No retries permitted until 2026-02-16 02:05:51.957870774 +0000 UTC m=+70.593605930 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pfgxq" (UniqueName: "kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq") pod "network-check-target-hswdj" (UID: "e478bdcc-052e-42f8-91b6-58c26cfc9cfc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:05:50.187746 master-0 kubenswrapper[4147]: I0216 02:05:50.187687 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:05:50.188344 master-0 kubenswrapper[4147]: E0216 02:05:50.187822 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:05:51.187778 master-0 kubenswrapper[4147]: I0216 02:05:51.187671 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:51.188755 master-0 kubenswrapper[4147]: E0216 02:05:51.187889 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:51.974516 master-0 kubenswrapper[4147]: I0216 02:05:51.974396 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:05:51.974916 master-0 kubenswrapper[4147]: E0216 02:05:51.974642 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 02:05:51.974916 master-0 kubenswrapper[4147]: E0216 02:05:51.974677 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 02:05:51.974916 master-0 kubenswrapper[4147]: E0216 02:05:51.974692 4147 projected.go:194] Error preparing data for projected volume kube-api-access-pfgxq for pod openshift-network-diagnostics/network-check-target-hswdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:05:51.974916 master-0 kubenswrapper[4147]: E0216 02:05:51.974755 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq podName:e478bdcc-052e-42f8-91b6-58c26cfc9cfc nodeName:}" failed. No retries permitted until 2026-02-16 02:05:55.974736466 +0000 UTC m=+74.610471592 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pfgxq" (UniqueName: "kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq") pod "network-check-target-hswdj" (UID: "e478bdcc-052e-42f8-91b6-58c26cfc9cfc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:05:52.187213 master-0 kubenswrapper[4147]: I0216 02:05:52.187141 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:05:52.187825 master-0 kubenswrapper[4147]: E0216 02:05:52.187767 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:05:52.825785 master-0 kubenswrapper[4147]: I0216 02:05:52.825708 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-kffmg"]
Feb 16 02:05:52.826155 master-0 kubenswrapper[4147]: I0216 02:05:52.826113 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.828118 master-0 kubenswrapper[4147]: I0216 02:05:52.828058 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 02:05:52.828383 master-0 kubenswrapper[4147]: I0216 02:05:52.828343 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 02:05:52.830173 master-0 kubenswrapper[4147]: I0216 02:05:52.830067 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 02:05:52.830548 master-0 kubenswrapper[4147]: I0216 02:05:52.830248 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 02:05:52.830548 master-0 kubenswrapper[4147]: I0216 02:05:52.830359 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 02:05:52.883077 master-0 kubenswrapper[4147]: I0216 02:05:52.882997 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.883077 master-0 kubenswrapper[4147]: I0216 02:05:52.883052 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlnkb\" (UniqueName: \"kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.883077 master-0 kubenswrapper[4147]: I0216 02:05:52.883091 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.883477 master-0 kubenswrapper[4147]: I0216 02:05:52.883317 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.983988 master-0 kubenswrapper[4147]: I0216 02:05:52.983944 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.984081 master-0 kubenswrapper[4147]: I0216 02:05:52.983992 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.984081 master-0 kubenswrapper[4147]: I0216 02:05:52.984018 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlnkb\" (UniqueName: \"kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.984081 master-0 kubenswrapper[4147]: I0216 02:05:52.984045 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.984575 master-0 kubenswrapper[4147]: E0216 02:05:52.984547 4147 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found
Feb 16 02:05:52.984647 master-0 kubenswrapper[4147]: E0216 02:05:52.984608 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert podName:dbc5b101-936f-4bf3-bbf3-f30966b0ab50 nodeName:}" failed. No retries permitted until 2026-02-16 02:05:53.484589658 +0000 UTC m=+72.120324784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert") pod "network-node-identity-kffmg" (UID: "dbc5b101-936f-4bf3-bbf3-f30966b0ab50") : secret "network-node-identity-cert" not found
Feb 16 02:05:52.985342 master-0 kubenswrapper[4147]: I0216 02:05:52.985306 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:52.986340 master-0 kubenswrapper[4147]: I0216 02:05:52.986295 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:53.006068 master-0 kubenswrapper[4147]: I0216 02:05:53.006032 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlnkb\" (UniqueName: \"kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:53.187355 master-0 kubenswrapper[4147]: I0216 02:05:53.187309 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:53.187555 master-0 kubenswrapper[4147]: E0216 02:05:53.187451 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:53.488606 master-0 kubenswrapper[4147]: I0216 02:05:53.488496 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:53.492964 master-0 kubenswrapper[4147]: I0216 02:05:53.492901 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:53.541064 master-0 kubenswrapper[4147]: I0216 02:05:53.540976 4147 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="f32a8a71ff757721727f0a15b091975f54ceee8df971155b55280b5af1e45ccf" exitCode=0
Feb 16 02:05:53.541064 master-0 kubenswrapper[4147]: I0216 02:05:53.541017 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" event={"ID":"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75","Type":"ContainerDied","Data":"f32a8a71ff757721727f0a15b091975f54ceee8df971155b55280b5af1e45ccf"}
Feb 16 02:05:53.747211 master-0 kubenswrapper[4147]: I0216 02:05:53.747142 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-kffmg"
Feb 16 02:05:53.773720 master-0 kubenswrapper[4147]: W0216 02:05:53.773562 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbc5b101_936f_4bf3_bbf3_f30966b0ab50.slice/crio-ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72 WatchSource:0}: Error finding container ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72: Status 404 returned error can't find the container with id ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72
Feb 16 02:05:54.187489 master-0 kubenswrapper[4147]: I0216 02:05:54.187283 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:05:54.187489 master-0 kubenswrapper[4147]: E0216 02:05:54.187394 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:05:54.544133 master-0 kubenswrapper[4147]: I0216 02:05:54.544008 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kffmg" event={"ID":"dbc5b101-936f-4bf3-bbf3-f30966b0ab50","Type":"ContainerStarted","Data":"ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72"}
Feb 16 02:05:55.187202 master-0 kubenswrapper[4147]: I0216 02:05:55.187139 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:55.187526 master-0 kubenswrapper[4147]: E0216 02:05:55.187284 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:56.011173 master-0 kubenswrapper[4147]: I0216 02:05:56.011115 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:05:56.011665 master-0 kubenswrapper[4147]: E0216 02:05:56.011318 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 02:05:56.011665 master-0 kubenswrapper[4147]: E0216 02:05:56.011344 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 02:05:56.011665 master-0 kubenswrapper[4147]: E0216 02:05:56.011365 4147 projected.go:194] Error preparing data for projected volume kube-api-access-pfgxq for pod openshift-network-diagnostics/network-check-target-hswdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:05:56.011665 master-0 kubenswrapper[4147]: E0216 02:05:56.011427 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq podName:e478bdcc-052e-42f8-91b6-58c26cfc9cfc nodeName:}" failed. No retries permitted until 2026-02-16 02:06:04.011405007 +0000 UTC m=+82.647140153 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-pfgxq" (UniqueName: "kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq") pod "network-check-target-hswdj" (UID: "e478bdcc-052e-42f8-91b6-58c26cfc9cfc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:05:56.187534 master-0 kubenswrapper[4147]: I0216 02:05:56.187428 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:05:56.187722 master-0 kubenswrapper[4147]: E0216 02:05:56.187651 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:05:57.187939 master-0 kubenswrapper[4147]: I0216 02:05:57.187891 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:57.188552 master-0 kubenswrapper[4147]: E0216 02:05:57.188106 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:58.187463 master-0 kubenswrapper[4147]: I0216 02:05:58.187358 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:05:58.187801 master-0 kubenswrapper[4147]: E0216 02:05:58.187655 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:05:59.187336 master-0 kubenswrapper[4147]: I0216 02:05:59.187262 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:05:59.188081 master-0 kubenswrapper[4147]: E0216 02:05:59.187580 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:05:59.200673 master-0 kubenswrapper[4147]: I0216 02:05:59.200632 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Feb 16 02:05:59.200933 master-0 kubenswrapper[4147]: W0216 02:05:59.200907 4147 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 16 02:06:00.187340 master-0 kubenswrapper[4147]: I0216 02:06:00.187253 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:00.188261 master-0 kubenswrapper[4147]: E0216 02:06:00.187479 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:06:01.186854 master-0 kubenswrapper[4147]: I0216 02:06:01.186807 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:06:01.187052 master-0 kubenswrapper[4147]: E0216 02:06:01.186931 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:06:02.187032 master-0 kubenswrapper[4147]: I0216 02:06:02.186958 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:02.188493 master-0 kubenswrapper[4147]: E0216 02:06:02.188206 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:06:03.187322 master-0 kubenswrapper[4147]: I0216 02:06:03.187252 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:06:03.188494 master-0 kubenswrapper[4147]: E0216 02:06:03.187421 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:06:04.087779 master-0 kubenswrapper[4147]: I0216 02:06:04.087667 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:04.087938 master-0 kubenswrapper[4147]: E0216 02:06:04.087913 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 02:06:04.087972 master-0 kubenswrapper[4147]: E0216 02:06:04.087943 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 02:06:04.087972 master-0 kubenswrapper[4147]: E0216 02:06:04.087962 4147 projected.go:194] Error preparing data for projected volume kube-api-access-pfgxq for pod openshift-network-diagnostics/network-check-target-hswdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:06:04.088049 master-0 kubenswrapper[4147]: E0216 02:06:04.088026 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq podName:e478bdcc-052e-42f8-91b6-58c26cfc9cfc nodeName:}" failed. No retries permitted until 2026-02-16 02:06:20.088005405 +0000 UTC m=+98.723740551 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-pfgxq" (UniqueName: "kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq") pod "network-check-target-hswdj" (UID: "e478bdcc-052e-42f8-91b6-58c26cfc9cfc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 02:06:04.188698 master-0 kubenswrapper[4147]: I0216 02:06:04.188644 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:04.189106 master-0 kubenswrapper[4147]: E0216 02:06:04.188790 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:06:05.187883 master-0 kubenswrapper[4147]: I0216 02:06:05.187801 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:06:05.188197 master-0 kubenswrapper[4147]: E0216 02:06:05.188042 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:06:05.397668 master-0 kubenswrapper[4147]: I0216 02:06:05.397574 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:06:05.398533 master-0 kubenswrapper[4147]: E0216 02:06:05.397778 4147 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:06:05.398533 master-0 kubenswrapper[4147]: E0216 02:06:05.397867 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:06:37.397842583 +0000 UTC m=+116.033577709 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 02:06:06.187159 master-0 kubenswrapper[4147]: I0216 02:06:06.187064 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:06.187500 master-0 kubenswrapper[4147]: E0216 02:06:06.187310 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:06:07.187748 master-0 kubenswrapper[4147]: I0216 02:06:07.187685 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:06:07.188928 master-0 kubenswrapper[4147]: E0216 02:06:07.188835 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e"
Feb 16 02:06:08.187684 master-0 kubenswrapper[4147]: I0216 02:06:08.187641 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:08.187792 master-0 kubenswrapper[4147]: E0216 02:06:08.187772 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc"
Feb 16 02:06:08.582106 master-0 kubenswrapper[4147]: I0216 02:06:08.581922 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kffmg" event={"ID":"dbc5b101-936f-4bf3-bbf3-f30966b0ab50","Type":"ContainerStarted","Data":"73c5d4096ed4f5f723bea74695c09c9920b7cf6836ef92fa2286119a88696c78"}
Feb 16 02:06:08.582106 master-0 kubenswrapper[4147]: I0216 02:06:08.581986 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kffmg" event={"ID":"dbc5b101-936f-4bf3-bbf3-f30966b0ab50","Type":"ContainerStarted","Data":"528daeba946749640ce2b81f6f5bcc37687cc1690c040b79308891be8d1c3952"}
Feb 16 02:06:08.584614 master-0 kubenswrapper[4147]: I0216 02:06:08.584542 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" event={"ID":"f7317f91-9441-449f-9738-85da088cf94f","Type":"ContainerStarted","Data":"b0f87ddc237d60c2bab39a1452b1e36c685e800e91756d3d4eee6ecf6e94ac8b"}
Feb 16 02:06:08.588916 master-0 kubenswrapper[4147]: I0216 02:06:08.588814 4147 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="e44874f0350470f24d5ea4a5701795fe3efa7441e0282bd848060a5c5089ab29" exitCode=0
Feb 16 02:06:08.589056 master-0 kubenswrapper[4147]: I0216 02:06:08.588943 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" event={"ID":"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75","Type":"ContainerDied","Data":"e44874f0350470f24d5ea4a5701795fe3efa7441e0282bd848060a5c5089ab29"}
Feb 16 02:06:08.590894 master-0 kubenswrapper[4147]: I0216 02:06:08.590845 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24"
containerID="a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c" exitCode=0 Feb 16 02:06:08.590997 master-0 kubenswrapper[4147]: I0216 02:06:08.590894 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"} Feb 16 02:06:08.602891 master-0 kubenswrapper[4147]: I0216 02:06:08.602794 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-kffmg" podStartSLOduration=2.603319778 podStartE2EDuration="16.602767633s" podCreationTimestamp="2026-02-16 02:05:52 +0000 UTC" firstStartedPulling="2026-02-16 02:05:53.778424996 +0000 UTC m=+72.414160122" lastFinishedPulling="2026-02-16 02:06:07.777872811 +0000 UTC m=+86.413607977" observedRunningTime="2026-02-16 02:06:08.601801949 +0000 UTC m=+87.237537115" watchObservedRunningTime="2026-02-16 02:06:08.602767633 +0000 UTC m=+87.238502809" Feb 16 02:06:08.603020 master-0 kubenswrapper[4147]: I0216 02:06:08.602977 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=9.602968028 podStartE2EDuration="9.602968028s" podCreationTimestamp="2026-02-16 02:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:06:02.210303936 +0000 UTC m=+80.846039062" watchObservedRunningTime="2026-02-16 02:06:08.602968028 +0000 UTC m=+87.238703204" Feb 16 02:06:08.679055 master-0 kubenswrapper[4147]: I0216 02:06:08.678939 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" podStartSLOduration=2.8571903499999998 podStartE2EDuration="24.678905422s" podCreationTimestamp="2026-02-16 02:05:44 +0000 UTC" 
firstStartedPulling="2026-02-16 02:05:45.829216063 +0000 UTC m=+64.464951189" lastFinishedPulling="2026-02-16 02:06:07.650931105 +0000 UTC m=+86.286666261" observedRunningTime="2026-02-16 02:06:08.672939223 +0000 UTC m=+87.308674349" watchObservedRunningTime="2026-02-16 02:06:08.678905422 +0000 UTC m=+87.314640578" Feb 16 02:06:09.187559 master-0 kubenswrapper[4147]: I0216 02:06:09.187486 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:09.187726 master-0 kubenswrapper[4147]: E0216 02:06:09.187633 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:09.605043 master-0 kubenswrapper[4147]: I0216 02:06:09.604890 4147 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="0858acab9d05c2a71790635b1c93f375645e501dd52d5da79fb7b1cbf9b57e86" exitCode=0 Feb 16 02:06:09.605043 master-0 kubenswrapper[4147]: I0216 02:06:09.604963 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" event={"ID":"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75","Type":"ContainerDied","Data":"0858acab9d05c2a71790635b1c93f375645e501dd52d5da79fb7b1cbf9b57e86"} Feb 16 02:06:09.613926 master-0 kubenswrapper[4147]: I0216 02:06:09.613877 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"} Feb 16 02:06:09.614021 master-0 kubenswrapper[4147]: I0216 
02:06:09.613943 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"} Feb 16 02:06:09.614021 master-0 kubenswrapper[4147]: I0216 02:06:09.613966 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"} Feb 16 02:06:09.614021 master-0 kubenswrapper[4147]: I0216 02:06:09.613985 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"} Feb 16 02:06:09.614021 master-0 kubenswrapper[4147]: I0216 02:06:09.614004 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"} Feb 16 02:06:09.614236 master-0 kubenswrapper[4147]: I0216 02:06:09.614028 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"} Feb 16 02:06:10.187728 master-0 kubenswrapper[4147]: I0216 02:06:10.187674 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:10.187966 master-0 kubenswrapper[4147]: E0216 02:06:10.187867 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:10.297913 master-0 kubenswrapper[4147]: I0216 02:06:10.296674 4147 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmw46"] Feb 16 02:06:10.621325 master-0 kubenswrapper[4147]: I0216 02:06:10.621184 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" event={"ID":"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75","Type":"ContainerStarted","Data":"361f284d64a45eb309a4c064ab4873d1f3e9155a865c47ecbe13fbb084251c26"} Feb 16 02:06:10.646337 master-0 kubenswrapper[4147]: I0216 02:06:10.646261 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mvdkf" podStartSLOduration=4.105145239 podStartE2EDuration="38.646235987s" podCreationTimestamp="2026-02-16 02:05:32 +0000 UTC" firstStartedPulling="2026-02-16 02:05:33.108934954 +0000 UTC m=+51.744670100" lastFinishedPulling="2026-02-16 02:06:07.650025702 +0000 UTC m=+86.285760848" observedRunningTime="2026-02-16 02:06:10.646155155 +0000 UTC m=+89.281890371" watchObservedRunningTime="2026-02-16 02:06:10.646235987 +0000 UTC m=+89.281971133" Feb 16 02:06:11.187780 master-0 kubenswrapper[4147]: I0216 02:06:11.187713 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:11.188095 master-0 kubenswrapper[4147]: E0216 02:06:11.187891 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:12.187365 master-0 kubenswrapper[4147]: I0216 02:06:12.187272 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:12.188566 master-0 kubenswrapper[4147]: E0216 02:06:12.188413 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:12.634929 master-0 kubenswrapper[4147]: I0216 02:06:12.634761 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"} Feb 16 02:06:13.187271 master-0 kubenswrapper[4147]: I0216 02:06:13.187184 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:13.187749 master-0 kubenswrapper[4147]: E0216 02:06:13.187354 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:14.188569 master-0 kubenswrapper[4147]: I0216 02:06:14.187123 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:14.188569 master-0 kubenswrapper[4147]: E0216 02:06:14.187648 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:14.649704 master-0 kubenswrapper[4147]: I0216 02:06:14.649610 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerStarted","Data":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"} Feb 16 02:06:14.650064 master-0 kubenswrapper[4147]: I0216 02:06:14.649894 4147 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovn-controller" containerID="cri-o://1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375" gracePeriod=30 Feb 16 02:06:14.650064 master-0 kubenswrapper[4147]: I0216 02:06:14.649946 4147 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb" gracePeriod=30 Feb 16 02:06:14.650247 master-0 kubenswrapper[4147]: I0216 02:06:14.650037 4147 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kube-rbac-proxy-node" containerID="cri-o://ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e" gracePeriod=30 Feb 16 02:06:14.650247 master-0 kubenswrapper[4147]: I0216 02:06:14.649905 4147 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="sbdb" containerID="cri-o://549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" gracePeriod=30 Feb 16 02:06:14.650247 master-0 
kubenswrapper[4147]: I0216 02:06:14.650128 4147 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="nbdb" containerID="cri-o://54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" gracePeriod=30 Feb 16 02:06:14.650430 master-0 kubenswrapper[4147]: I0216 02:06:14.650231 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:06:14.650430 master-0 kubenswrapper[4147]: I0216 02:06:14.650204 4147 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovn-acl-logging" containerID="cri-o://56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918" gracePeriod=30 Feb 16 02:06:14.650430 master-0 kubenswrapper[4147]: I0216 02:06:14.650263 4147 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="northd" containerID="cri-o://909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66" gracePeriod=30 Feb 16 02:06:14.651968 master-0 kubenswrapper[4147]: I0216 02:06:14.651695 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:06:14.651968 master-0 kubenswrapper[4147]: I0216 02:06:14.651747 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:06:14.654231 master-0 kubenswrapper[4147]: E0216 02:06:14.654155 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 02:06:14.654371 master-0 kubenswrapper[4147]: E0216 02:06:14.654313 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 02:06:14.656427 master-0 kubenswrapper[4147]: E0216 02:06:14.656343 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 02:06:14.657324 master-0 kubenswrapper[4147]: E0216 02:06:14.656570 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 02:06:14.660713 master-0 kubenswrapper[4147]: E0216 02:06:14.659769 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 02:06:14.660713 master-0 kubenswrapper[4147]: E0216 02:06:14.659798 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 02:06:14.660713 master-0 kubenswrapper[4147]: E0216 02:06:14.659849 4147 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="nbdb" Feb 16 02:06:14.660713 master-0 kubenswrapper[4147]: E0216 02:06:14.659886 4147 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="sbdb" Feb 16 02:06:14.689500 master-0 kubenswrapper[4147]: I0216 02:06:14.687681 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podStartSLOduration=7.632998488 podStartE2EDuration="29.687654879s" podCreationTimestamp="2026-02-16 02:05:45 +0000 UTC" firstStartedPulling="2026-02-16 02:05:45.590470449 +0000 UTC m=+64.226205565" lastFinishedPulling="2026-02-16 02:06:07.64512684 +0000 UTC m=+86.280861956" observedRunningTime="2026-02-16 02:06:14.687141296 +0000 UTC m=+93.322876482" watchObservedRunningTime="2026-02-16 02:06:14.687654879 +0000 UTC m=+93.323390035" Feb 16 02:06:14.704725 master-0 
kubenswrapper[4147]: I0216 02:06:14.704639 4147 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovnkube-controller" containerID="cri-o://6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba" gracePeriod=30 Feb 16 02:06:15.187092 master-0 kubenswrapper[4147]: I0216 02:06:15.187005 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:15.187379 master-0 kubenswrapper[4147]: E0216 02:06:15.187237 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:15.569839 master-0 kubenswrapper[4147]: E0216 02:06:15.569710 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 is running failed: container process not found" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 02:06:15.569839 master-0 kubenswrapper[4147]: E0216 02:06:15.569741 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 is running failed: container process not found" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 02:06:15.570770 master-0 kubenswrapper[4147]: E0216 02:06:15.570588 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 is running failed: container process not found" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 02:06:15.570770 master-0 kubenswrapper[4147]: E0216 02:06:15.570587 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 is running failed: container process not found" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 02:06:15.571229 master-0 kubenswrapper[4147]: E0216 02:06:15.571151 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 is running failed: container process not found" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 02:06:15.571229 master-0 kubenswrapper[4147]: E0216 02:06:15.571187 4147 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 is running failed: container process not found" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 02:06:15.571348 master-0 kubenswrapper[4147]: E0216 02:06:15.571225 4147 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="nbdb" Feb 16 02:06:15.571348 master-0 kubenswrapper[4147]: E0216 02:06:15.571241 4147 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="sbdb" Feb 16 02:06:15.599797 master-0 kubenswrapper[4147]: I0216 02:06:15.599725 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/ovnkube-controller/0.log" Feb 16 02:06:15.602626 master-0 kubenswrapper[4147]: I0216 02:06:15.602565 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/kube-rbac-proxy-ovn-metrics/0.log" Feb 16 02:06:15.603392 master-0 kubenswrapper[4147]: I0216 02:06:15.603325 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/kube-rbac-proxy-node/0.log" Feb 16 02:06:15.604673 master-0 kubenswrapper[4147]: I0216 02:06:15.604605 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/ovn-acl-logging/0.log" Feb 16 
02:06:15.608094 master-0 kubenswrapper[4147]: I0216 02:06:15.608010 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/ovn-controller/0.log" Feb 16 02:06:15.609406 master-0 kubenswrapper[4147]: I0216 02:06:15.609365 4147 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" Feb 16 02:06:15.659467 master-0 kubenswrapper[4147]: I0216 02:06:15.659322 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/ovnkube-controller/0.log" Feb 16 02:06:15.663337 master-0 kubenswrapper[4147]: I0216 02:06:15.663266 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/kube-rbac-proxy-ovn-metrics/0.log" Feb 16 02:06:15.664420 master-0 kubenswrapper[4147]: I0216 02:06:15.664325 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/kube-rbac-proxy-node/0.log" Feb 16 02:06:15.665153 master-0 kubenswrapper[4147]: I0216 02:06:15.665098 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/ovn-acl-logging/0.log" Feb 16 02:06:15.668031 master-0 kubenswrapper[4147]: I0216 02:06:15.667966 4147 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmw46_30d41850-9a4b-4ce2-9902-a59492adeb24/ovn-controller/0.log" Feb 16 02:06:15.668655 master-0 kubenswrapper[4147]: I0216 02:06:15.668600 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerID="6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba" exitCode=1 Feb 16 02:06:15.668655 master-0 kubenswrapper[4147]: 
I0216 02:06:15.668645 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" exitCode=0
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668663 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" exitCode=0
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668701 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerID="909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66" exitCode=0
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668715 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerID="6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb" exitCode=143
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668732 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerID="ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e" exitCode=143
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668748 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerID="56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918" exitCode=143
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668762 4147 generic.go:334] "Generic (PLEG): container finished" podID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerID="1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375" exitCode=143
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668792 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"}
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668832 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"}
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668856 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"}
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668876 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"}
Feb 16 02:06:15.668872 master-0 kubenswrapper[4147]: I0216 02:06:15.668896 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.668919 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.668940 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669137 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669150 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669166 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669183 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669196 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669208 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669218 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669229 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669240 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669250 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669261 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669271 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669286 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669301 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669313 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669324 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669334 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669345 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669355 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669365 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669376 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669552 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669578 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46" event={"ID":"30d41850-9a4b-4ce2-9902-a59492adeb24","Type":"ContainerDied","Data":"e947e806d4cd304de7ae0e1eadc87bafdcb0e816f937b266a440ab82d15412f1"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669603 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"}
Feb 16 02:06:15.669928 master-0 kubenswrapper[4147]: I0216 02:06:15.669621 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"}
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669632 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"}
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669642 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"}
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669654 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"}
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669664 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"}
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669675 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"}
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669686 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"}
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669699 4147 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"}
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669719 4147 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmw46"
Feb 16 02:06:15.671560 master-0 kubenswrapper[4147]: I0216 02:06:15.669760 4147 scope.go:117] "RemoveContainer" containerID="6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"
Feb 16 02:06:15.678462 master-0 kubenswrapper[4147]: I0216 02:06:15.678376 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bs85n"]
Feb 16 02:06:15.678626 master-0 kubenswrapper[4147]: E0216 02:06:15.678539 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovn-acl-logging"
Feb 16 02:06:15.678626 master-0 kubenswrapper[4147]: I0216 02:06:15.678559 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovn-acl-logging"
Feb 16 02:06:15.678626 master-0 kubenswrapper[4147]: E0216 02:06:15.678602 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kubecfg-setup"
Feb 16 02:06:15.678626 master-0 kubenswrapper[4147]: I0216 02:06:15.678615 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kubecfg-setup"
Feb 16 02:06:15.678626 master-0 kubenswrapper[4147]: E0216 02:06:15.678630 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678643 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: E0216 02:06:15.678658 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovn-controller"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678672 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovn-controller"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: E0216 02:06:15.678686 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kube-rbac-proxy-node"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678698 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kube-rbac-proxy-node"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: E0216 02:06:15.678712 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="nbdb"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678724 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="nbdb"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: E0216 02:06:15.678737 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="sbdb"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678749 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="sbdb"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: E0216 02:06:15.678766 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="northd"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678778 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="northd"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: E0216 02:06:15.678790 4147 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovnkube-controller"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678802 4147 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovnkube-controller"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678861 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovn-controller"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678876 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="nbdb"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678891 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="sbdb"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678904 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovn-acl-logging"
Feb 16 02:06:15.678898 master-0 kubenswrapper[4147]: I0216 02:06:15.678917 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 02:06:15.679902 master-0 kubenswrapper[4147]: I0216 02:06:15.678930 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="northd"
Feb 16 02:06:15.679902 master-0 kubenswrapper[4147]: I0216 02:06:15.678943 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="kube-rbac-proxy-node"
Feb 16 02:06:15.679902 master-0 kubenswrapper[4147]: I0216 02:06:15.678956 4147 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" containerName="ovnkube-controller"
Feb 16 02:06:15.680066 master-0 kubenswrapper[4147]: I0216 02:06:15.679953 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.696951 master-0 kubenswrapper[4147]: I0216 02:06:15.696898 4147 scope.go:117] "RemoveContainer" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"
Feb 16 02:06:15.698745 master-0 kubenswrapper[4147]: I0216 02:06:15.698710 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-etc-openvswitch\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.698889 master-0 kubenswrapper[4147]: I0216 02:06:15.698761 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-netd\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.698889 master-0 kubenswrapper[4147]: I0216 02:06:15.698816 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-script-lib\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.698889 master-0 kubenswrapper[4147]: I0216 02:06:15.698867 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-systemd\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699069 master-0 kubenswrapper[4147]: I0216 02:06:15.698902 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-netns\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699069 master-0 kubenswrapper[4147]: I0216 02:06:15.698892 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.699069 master-0 kubenswrapper[4147]: I0216 02:06:15.698931 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-bin\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699069 master-0 kubenswrapper[4147]: I0216 02:06:15.698958 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-openvswitch\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699069 master-0 kubenswrapper[4147]: I0216 02:06:15.699033 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.699331 master-0 kubenswrapper[4147]: I0216 02:06:15.699073 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.699331 master-0 kubenswrapper[4147]: I0216 02:06:15.699156 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.699331 master-0 kubenswrapper[4147]: I0216 02:06:15.699210 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.699331 master-0 kubenswrapper[4147]: I0216 02:06:15.699267 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d41850-9a4b-4ce2-9902-a59492adeb24-ovn-node-metrics-cert\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699576 master-0 kubenswrapper[4147]: I0216 02:06:15.699393 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-config\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699576 master-0 kubenswrapper[4147]: I0216 02:06:15.699430 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-kubelet\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699576 master-0 kubenswrapper[4147]: I0216 02:06:15.699545 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-var-lib-cni-networks-ovn-kubernetes\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699730 master-0 kubenswrapper[4147]: I0216 02:06:15.699581 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-ovn-kubernetes\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699730 master-0 kubenswrapper[4147]: I0216 02:06:15.699614 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-ovn\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699730 master-0 kubenswrapper[4147]: I0216 02:06:15.699650 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-log-socket\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699730 master-0 kubenswrapper[4147]: I0216 02:06:15.699689 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-systemd-units\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699869 master-0 kubenswrapper[4147]: I0216 02:06:15.699726 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-slash\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699869 master-0 kubenswrapper[4147]: I0216 02:06:15.699758 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-node-log\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699869 master-0 kubenswrapper[4147]: I0216 02:06:15.699796 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4pzn\" (UniqueName: \"kubernetes.io/projected/30d41850-9a4b-4ce2-9902-a59492adeb24-kube-api-access-k4pzn\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699869 master-0 kubenswrapper[4147]: I0216 02:06:15.699800 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.699869 master-0 kubenswrapper[4147]: I0216 02:06:15.699830 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-env-overrides\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.699869 master-0 kubenswrapper[4147]: I0216 02:06:15.699828 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:06:15.699869 master-0 kubenswrapper[4147]: I0216 02:06:15.699864 4147 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-var-lib-openvswitch\") pod \"30d41850-9a4b-4ce2-9902-a59492adeb24\" (UID: \"30d41850-9a4b-4ce2-9902-a59492adeb24\") "
Feb 16 02:06:15.700107 master-0 kubenswrapper[4147]: I0216 02:06:15.699868 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.700107 master-0 kubenswrapper[4147]: I0216 02:06:15.699978 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.700177 master-0 kubenswrapper[4147]: I0216 02:06:15.700134 4147 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-netns\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.700227 master-0 kubenswrapper[4147]: I0216 02:06:15.700166 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-slash" (OuterVolumeSpecName: "host-slash") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.700227 master-0 kubenswrapper[4147]: I0216 02:06:15.700200 4147 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-bin\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.700305 master-0 kubenswrapper[4147]: I0216 02:06:15.700225 4147 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.700305 master-0 kubenswrapper[4147]: I0216 02:06:15.700221 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:06:15.700305 master-0 kubenswrapper[4147]: I0216 02:06:15.700222 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.700305 master-0 kubenswrapper[4147]: I0216 02:06:15.700280 4147 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-kubelet\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.700305 master-0 kubenswrapper[4147]: I0216 02:06:15.700301 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-log-socket" (OuterVolumeSpecName: "log-socket") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700304 4147 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700345 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700360 4147 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700390 4147 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-etc-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700391 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700500 4147 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-cni-netd\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700504 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-node-log" (OuterVolumeSpecName: "node-log") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700530 4147 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:15.702034 master-0 kubenswrapper[4147]: I0216 02:06:15.700998 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:06:15.706418 master-0 kubenswrapper[4147]: I0216 02:06:15.706358 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d41850-9a4b-4ce2-9902-a59492adeb24-kube-api-access-k4pzn" (OuterVolumeSpecName: "kube-api-access-k4pzn") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "kube-api-access-k4pzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:06:15.707462 master-0 kubenswrapper[4147]: I0216 02:06:15.707343 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d41850-9a4b-4ce2-9902-a59492adeb24-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:06:15.708954 master-0 kubenswrapper[4147]: I0216 02:06:15.708883 4147 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "30d41850-9a4b-4ce2-9902-a59492adeb24" (UID: "30d41850-9a4b-4ce2-9902-a59492adeb24"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:15.726250 master-0 kubenswrapper[4147]: I0216 02:06:15.725954 4147 scope.go:117] "RemoveContainer" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"
Feb 16 02:06:15.739402 master-0 kubenswrapper[4147]: I0216 02:06:15.739347 4147 scope.go:117] "RemoveContainer" containerID="909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"
Feb 16 02:06:15.755318 master-0 kubenswrapper[4147]: I0216 02:06:15.755292 4147 scope.go:117] "RemoveContainer" containerID="6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"
Feb 16 02:06:15.766939 master-0 kubenswrapper[4147]: I0216 02:06:15.766895 4147 scope.go:117] "RemoveContainer" containerID="ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"
Feb 16 02:06:15.781396 master-0 kubenswrapper[4147]: I0216 02:06:15.780964 4147 scope.go:117] "RemoveContainer" containerID="56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"
Feb 16 02:06:15.797058 master-0 kubenswrapper[4147]: I0216 02:06:15.796958 4147 scope.go:117] "RemoveContainer" containerID="1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"
Feb 16 02:06:15.800896 master-0 kubenswrapper[4147]: I0216 02:06:15.800832 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID:
\"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.800896 master-0 kubenswrapper[4147]: I0216 02:06:15.800888 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801197 master-0 kubenswrapper[4147]: I0216 02:06:15.800980 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801197 master-0 kubenswrapper[4147]: I0216 02:06:15.801047 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801197 master-0 kubenswrapper[4147]: I0216 02:06:15.801130 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801197 master-0 kubenswrapper[4147]: I0216 02:06:15.801165 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801214 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801234 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801251 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801289 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801348 4147 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801374 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801398 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801453 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801487 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 
master-0 kubenswrapper[4147]: I0216 02:06:15.801516 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjsbs\" (UniqueName: \"kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.801569 master-0 kubenswrapper[4147]: I0216 02:06:15.801594 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801620 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801642 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801671 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801740 4147 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-systemd\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801757 4147 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30d41850-9a4b-4ce2-9902-a59492adeb24-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801770 4147 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801784 4147 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801796 4147 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-log-socket\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801810 4147 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801822 4147 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-host-slash\") on node 
\"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801834 4147 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-node-log\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801845 4147 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4pzn\" (UniqueName: \"kubernetes.io/projected/30d41850-9a4b-4ce2-9902-a59492adeb24-kube-api-access-k4pzn\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801857 4147 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30d41850-9a4b-4ce2-9902-a59492adeb24-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.802211 master-0 kubenswrapper[4147]: I0216 02:06:15.801871 4147 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30d41850-9a4b-4ce2-9902-a59492adeb24-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:15.812174 master-0 kubenswrapper[4147]: I0216 02:06:15.812142 4147 scope.go:117] "RemoveContainer" containerID="a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c" Feb 16 02:06:15.828146 master-0 kubenswrapper[4147]: I0216 02:06:15.828085 4147 scope.go:117] "RemoveContainer" containerID="6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba" Feb 16 02:06:15.828928 master-0 kubenswrapper[4147]: E0216 02:06:15.828870 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": container with ID starting with 6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba not found: ID does not exist" 
containerID="6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba" Feb 16 02:06:15.829011 master-0 kubenswrapper[4147]: I0216 02:06:15.828928 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"} err="failed to get container status \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": rpc error: code = NotFound desc = could not find container \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": container with ID starting with 6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba not found: ID does not exist" Feb 16 02:06:15.829011 master-0 kubenswrapper[4147]: I0216 02:06:15.828973 4147 scope.go:117] "RemoveContainer" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" Feb 16 02:06:15.829668 master-0 kubenswrapper[4147]: E0216 02:06:15.829598 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": container with ID starting with 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 not found: ID does not exist" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" Feb 16 02:06:15.829780 master-0 kubenswrapper[4147]: I0216 02:06:15.829669 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"} err="failed to get container status \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": rpc error: code = NotFound desc = could not find container \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": container with ID starting with 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 not found: ID does not exist" Feb 16 02:06:15.829780 master-0 
kubenswrapper[4147]: I0216 02:06:15.829715 4147 scope.go:117] "RemoveContainer" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" Feb 16 02:06:15.830263 master-0 kubenswrapper[4147]: E0216 02:06:15.830195 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": container with ID starting with 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 not found: ID does not exist" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" Feb 16 02:06:15.830360 master-0 kubenswrapper[4147]: I0216 02:06:15.830247 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"} err="failed to get container status \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": rpc error: code = NotFound desc = could not find container \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": container with ID starting with 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 not found: ID does not exist" Feb 16 02:06:15.830360 master-0 kubenswrapper[4147]: I0216 02:06:15.830287 4147 scope.go:117] "RemoveContainer" containerID="909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66" Feb 16 02:06:15.830850 master-0 kubenswrapper[4147]: E0216 02:06:15.830795 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": container with ID starting with 909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66 not found: ID does not exist" containerID="909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66" Feb 16 02:06:15.830850 master-0 kubenswrapper[4147]: I0216 02:06:15.830838 4147 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"} err="failed to get container status \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": rpc error: code = NotFound desc = could not find container \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": container with ID starting with 909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66 not found: ID does not exist" Feb 16 02:06:15.831035 master-0 kubenswrapper[4147]: I0216 02:06:15.830864 4147 scope.go:117] "RemoveContainer" containerID="6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb" Feb 16 02:06:15.831308 master-0 kubenswrapper[4147]: E0216 02:06:15.831265 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": container with ID starting with 6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb not found: ID does not exist" containerID="6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb" Feb 16 02:06:15.831419 master-0 kubenswrapper[4147]: I0216 02:06:15.831311 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"} err="failed to get container status \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": rpc error: code = NotFound desc = could not find container \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": container with ID starting with 6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb not found: ID does not exist" Feb 16 02:06:15.831419 master-0 kubenswrapper[4147]: I0216 02:06:15.831346 4147 scope.go:117] "RemoveContainer" containerID="ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e" Feb 16 
02:06:15.831941 master-0 kubenswrapper[4147]: E0216 02:06:15.831884 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": container with ID starting with ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e not found: ID does not exist" containerID="ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e" Feb 16 02:06:15.832013 master-0 kubenswrapper[4147]: I0216 02:06:15.831929 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"} err="failed to get container status \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": rpc error: code = NotFound desc = could not find container \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": container with ID starting with ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e not found: ID does not exist" Feb 16 02:06:15.832013 master-0 kubenswrapper[4147]: I0216 02:06:15.831959 4147 scope.go:117] "RemoveContainer" containerID="56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918" Feb 16 02:06:15.832545 master-0 kubenswrapper[4147]: E0216 02:06:15.832490 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": container with ID starting with 56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918 not found: ID does not exist" containerID="56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918" Feb 16 02:06:15.832639 master-0 kubenswrapper[4147]: I0216 02:06:15.832538 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"} err="failed 
to get container status \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": rpc error: code = NotFound desc = could not find container \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": container with ID starting with 56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918 not found: ID does not exist" Feb 16 02:06:15.832639 master-0 kubenswrapper[4147]: I0216 02:06:15.832564 4147 scope.go:117] "RemoveContainer" containerID="1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375" Feb 16 02:06:15.833127 master-0 kubenswrapper[4147]: E0216 02:06:15.833066 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": container with ID starting with 1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375 not found: ID does not exist" containerID="1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375" Feb 16 02:06:15.833127 master-0 kubenswrapper[4147]: I0216 02:06:15.833111 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"} err="failed to get container status \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": rpc error: code = NotFound desc = could not find container \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": container with ID starting with 1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375 not found: ID does not exist" Feb 16 02:06:15.833260 master-0 kubenswrapper[4147]: I0216 02:06:15.833142 4147 scope.go:117] "RemoveContainer" containerID="a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c" Feb 16 02:06:15.833576 master-0 kubenswrapper[4147]: E0216 02:06:15.833537 4147 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": container with ID starting with a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c not found: ID does not exist" containerID="a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c" Feb 16 02:06:15.833676 master-0 kubenswrapper[4147]: I0216 02:06:15.833580 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"} err="failed to get container status \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": rpc error: code = NotFound desc = could not find container \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": container with ID starting with a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c not found: ID does not exist" Feb 16 02:06:15.833676 master-0 kubenswrapper[4147]: I0216 02:06:15.833626 4147 scope.go:117] "RemoveContainer" containerID="6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba" Feb 16 02:06:15.834169 master-0 kubenswrapper[4147]: I0216 02:06:15.834102 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"} err="failed to get container status \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": rpc error: code = NotFound desc = could not find container \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": container with ID starting with 6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba not found: ID does not exist" Feb 16 02:06:15.834169 master-0 kubenswrapper[4147]: I0216 02:06:15.834142 4147 scope.go:117] "RemoveContainer" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6" Feb 16 02:06:15.834613 master-0 kubenswrapper[4147]: I0216 02:06:15.834556 4147 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"} err="failed to get container status \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": rpc error: code = NotFound desc = could not find container \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": container with ID starting with 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 not found: ID does not exist" Feb 16 02:06:15.834613 master-0 kubenswrapper[4147]: I0216 02:06:15.834604 4147 scope.go:117] "RemoveContainer" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0" Feb 16 02:06:15.835077 master-0 kubenswrapper[4147]: I0216 02:06:15.835022 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"} err="failed to get container status \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": rpc error: code = NotFound desc = could not find container \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": container with ID starting with 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 not found: ID does not exist" Feb 16 02:06:15.835077 master-0 kubenswrapper[4147]: I0216 02:06:15.835063 4147 scope.go:117] "RemoveContainer" containerID="909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66" Feb 16 02:06:15.835559 master-0 kubenswrapper[4147]: I0216 02:06:15.835503 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"} err="failed to get container status \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": rpc error: code = NotFound desc = could not find container \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": container with ID starting with 
909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66 not found: ID does not exist"
Feb 16 02:06:15.835559 master-0 kubenswrapper[4147]: I0216 02:06:15.835547 4147 scope.go:117] "RemoveContainer" containerID="6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"
Feb 16 02:06:15.835964 master-0 kubenswrapper[4147]: I0216 02:06:15.835911 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"} err="failed to get container status \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": rpc error: code = NotFound desc = could not find container \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": container with ID starting with 6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb not found: ID does not exist"
Feb 16 02:06:15.835964 master-0 kubenswrapper[4147]: I0216 02:06:15.835955 4147 scope.go:117] "RemoveContainer" containerID="ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"
Feb 16 02:06:15.836464 master-0 kubenswrapper[4147]: I0216 02:06:15.836384 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"} err="failed to get container status \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": rpc error: code = NotFound desc = could not find container \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": container with ID starting with ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e not found: ID does not exist"
Feb 16 02:06:15.836464 master-0 kubenswrapper[4147]: I0216 02:06:15.836420 4147 scope.go:117] "RemoveContainer" containerID="56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"
Feb 16 02:06:15.836896 master-0 kubenswrapper[4147]: I0216 02:06:15.836843 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"} err="failed to get container status \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": rpc error: code = NotFound desc = could not find container \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": container with ID starting with 56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918 not found: ID does not exist"
Feb 16 02:06:15.836896 master-0 kubenswrapper[4147]: I0216 02:06:15.836886 4147 scope.go:117] "RemoveContainer" containerID="1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"
Feb 16 02:06:15.837277 master-0 kubenswrapper[4147]: I0216 02:06:15.837235 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"} err="failed to get container status \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": rpc error: code = NotFound desc = could not find container \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": container with ID starting with 1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375 not found: ID does not exist"
Feb 16 02:06:15.837277 master-0 kubenswrapper[4147]: I0216 02:06:15.837273 4147 scope.go:117] "RemoveContainer" containerID="a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"
Feb 16 02:06:15.837763 master-0 kubenswrapper[4147]: I0216 02:06:15.837718 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"} err="failed to get container status \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": rpc error: code = NotFound desc = could not find container \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": container with ID starting with a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c not found: ID does not exist"
Feb 16 02:06:15.837763 master-0 kubenswrapper[4147]: I0216 02:06:15.837752 4147 scope.go:117] "RemoveContainer" containerID="6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"
Feb 16 02:06:15.838123 master-0 kubenswrapper[4147]: I0216 02:06:15.838078 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"} err="failed to get container status \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": rpc error: code = NotFound desc = could not find container \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": container with ID starting with 6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba not found: ID does not exist"
Feb 16 02:06:15.838123 master-0 kubenswrapper[4147]: I0216 02:06:15.838117 4147 scope.go:117] "RemoveContainer" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"
Feb 16 02:06:15.838662 master-0 kubenswrapper[4147]: I0216 02:06:15.838618 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"} err="failed to get container status \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": rpc error: code = NotFound desc = could not find container \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": container with ID starting with 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 not found: ID does not exist"
Feb 16 02:06:15.838662 master-0 kubenswrapper[4147]: I0216 02:06:15.838654 4147 scope.go:117] "RemoveContainer" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"
Feb 16 02:06:15.839133 master-0 kubenswrapper[4147]: I0216 02:06:15.839080 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"} err="failed to get container status \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": rpc error: code = NotFound desc = could not find container \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": container with ID starting with 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 not found: ID does not exist"
Feb 16 02:06:15.839133 master-0 kubenswrapper[4147]: I0216 02:06:15.839121 4147 scope.go:117] "RemoveContainer" containerID="909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"
Feb 16 02:06:15.839852 master-0 kubenswrapper[4147]: I0216 02:06:15.839584 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"} err="failed to get container status \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": rpc error: code = NotFound desc = could not find container \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": container with ID starting with 909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66 not found: ID does not exist"
Feb 16 02:06:15.839852 master-0 kubenswrapper[4147]: I0216 02:06:15.839673 4147 scope.go:117] "RemoveContainer" containerID="6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"
Feb 16 02:06:15.840132 master-0 kubenswrapper[4147]: I0216 02:06:15.840092 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"} err="failed to get container status \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": rpc error: code = NotFound desc = could not find container \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": container with ID starting with 6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb not found: ID does not exist"
Feb 16 02:06:15.840132 master-0 kubenswrapper[4147]: I0216 02:06:15.840130 4147 scope.go:117] "RemoveContainer" containerID="ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"
Feb 16 02:06:15.840761 master-0 kubenswrapper[4147]: I0216 02:06:15.840558 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"} err="failed to get container status \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": rpc error: code = NotFound desc = could not find container \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": container with ID starting with ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e not found: ID does not exist"
Feb 16 02:06:15.840761 master-0 kubenswrapper[4147]: I0216 02:06:15.840606 4147 scope.go:117] "RemoveContainer" containerID="56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"
Feb 16 02:06:15.841024 master-0 kubenswrapper[4147]: I0216 02:06:15.840979 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"} err="failed to get container status \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": rpc error: code = NotFound desc = could not find container \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": container with ID starting with 56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918 not found: ID does not exist"
Feb 16 02:06:15.841024 master-0 kubenswrapper[4147]: I0216 02:06:15.841016 4147 scope.go:117] "RemoveContainer" containerID="1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"
Feb 16 02:06:15.841419 master-0 kubenswrapper[4147]: I0216 02:06:15.841347 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"} err="failed to get container status \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": rpc error: code = NotFound desc = could not find container \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": container with ID starting with 1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375 not found: ID does not exist"
Feb 16 02:06:15.841419 master-0 kubenswrapper[4147]: I0216 02:06:15.841392 4147 scope.go:117] "RemoveContainer" containerID="a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"
Feb 16 02:06:15.841901 master-0 kubenswrapper[4147]: I0216 02:06:15.841862 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"} err="failed to get container status \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": rpc error: code = NotFound desc = could not find container \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": container with ID starting with a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c not found: ID does not exist"
Feb 16 02:06:15.841901 master-0 kubenswrapper[4147]: I0216 02:06:15.841897 4147 scope.go:117] "RemoveContainer" containerID="6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"
Feb 16 02:06:15.842478 master-0 kubenswrapper[4147]: I0216 02:06:15.842293 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"} err="failed to get container status \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": rpc error: code = NotFound desc = could not find container \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": container with ID starting with 6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba not found: ID does not exist"
Feb 16 02:06:15.842478 master-0 kubenswrapper[4147]: I0216 02:06:15.842325 4147 scope.go:117] "RemoveContainer" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"
Feb 16 02:06:15.842858 master-0 kubenswrapper[4147]: I0216 02:06:15.842631 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"} err="failed to get container status \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": rpc error: code = NotFound desc = could not find container \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": container with ID starting with 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 not found: ID does not exist"
Feb 16 02:06:15.842858 master-0 kubenswrapper[4147]: I0216 02:06:15.842658 4147 scope.go:117] "RemoveContainer" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"
Feb 16 02:06:15.843237 master-0 kubenswrapper[4147]: I0216 02:06:15.843094 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"} err="failed to get container status \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": rpc error: code = NotFound desc = could not find container \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": container with ID starting with 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 not found: ID does not exist"
Feb 16 02:06:15.843237 master-0 kubenswrapper[4147]: I0216 02:06:15.843132 4147 scope.go:117] "RemoveContainer" containerID="909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"
Feb 16 02:06:15.843636 master-0 kubenswrapper[4147]: I0216 02:06:15.843587 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"} err="failed to get container status \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": rpc error: code = NotFound desc = could not find container \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": container with ID starting with 909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66 not found: ID does not exist"
Feb 16 02:06:15.843720 master-0 kubenswrapper[4147]: I0216 02:06:15.843635 4147 scope.go:117] "RemoveContainer" containerID="6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"
Feb 16 02:06:15.844157 master-0 kubenswrapper[4147]: I0216 02:06:15.844054 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"} err="failed to get container status \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": rpc error: code = NotFound desc = could not find container \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": container with ID starting with 6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb not found: ID does not exist"
Feb 16 02:06:15.844157 master-0 kubenswrapper[4147]: I0216 02:06:15.844090 4147 scope.go:117] "RemoveContainer" containerID="ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"
Feb 16 02:06:15.844814 master-0 kubenswrapper[4147]: I0216 02:06:15.844604 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"} err="failed to get container status \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": rpc error: code = NotFound desc = could not find container \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": container with ID starting with ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e not found: ID does not exist"
Feb 16 02:06:15.844814 master-0 kubenswrapper[4147]: I0216 02:06:15.844640 4147 scope.go:117] "RemoveContainer" containerID="56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"
Feb 16 02:06:15.845082 master-0 kubenswrapper[4147]: I0216 02:06:15.845031 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918"} err="failed to get container status \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": rpc error: code = NotFound desc = could not find container \"56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918\": container with ID starting with 56559338d27ab6d66e19e6479539b1c372389ae4413e7ff8d76d4c0ee8313918 not found: ID does not exist"
Feb 16 02:06:15.845147 master-0 kubenswrapper[4147]: I0216 02:06:15.845079 4147 scope.go:117] "RemoveContainer" containerID="1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"
Feb 16 02:06:15.845704 master-0 kubenswrapper[4147]: I0216 02:06:15.845639 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375"} err="failed to get container status \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": rpc error: code = NotFound desc = could not find container \"1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375\": container with ID starting with 1b7b7bdf4c642a1559ec30fb01fd3a1b4682cad332a4325e3ce885fc9b0bd375 not found: ID does not exist"
Feb 16 02:06:15.845704 master-0 kubenswrapper[4147]: I0216 02:06:15.845692 4147 scope.go:117] "RemoveContainer" containerID="a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"
Feb 16 02:06:15.846183 master-0 kubenswrapper[4147]: I0216 02:06:15.846119 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c"} err="failed to get container status \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": rpc error: code = NotFound desc = could not find container \"a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c\": container with ID starting with a6d27bfddbad624bf45e594d48d6801bfe98efc2c2670d7f3752b9d56ad3d75c not found: ID does not exist"
Feb 16 02:06:15.846183 master-0 kubenswrapper[4147]: I0216 02:06:15.846163 4147 scope.go:117] "RemoveContainer" containerID="6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"
Feb 16 02:06:15.846605 master-0 kubenswrapper[4147]: I0216 02:06:15.846536 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba"} err="failed to get container status \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": rpc error: code = NotFound desc = could not find container \"6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba\": container with ID starting with 6480bb3563cbd1a8cff960037546688c839f01c862a63b7c461d82549ab638ba not found: ID does not exist"
Feb 16 02:06:15.846716 master-0 kubenswrapper[4147]: I0216 02:06:15.846600 4147 scope.go:117] "RemoveContainer" containerID="549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"
Feb 16 02:06:15.847190 master-0 kubenswrapper[4147]: I0216 02:06:15.847138 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6"} err="failed to get container status \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": rpc error: code = NotFound desc = could not find container \"549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6\": container with ID starting with 549ee6f369575d943032c4f2c9b7b4db572d4e7b0fbe810a5d7632f7758f10f6 not found: ID does not exist"
Feb 16 02:06:15.847190 master-0 kubenswrapper[4147]: I0216 02:06:15.847178 4147 scope.go:117] "RemoveContainer" containerID="54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"
Feb 16 02:06:15.847521 master-0 kubenswrapper[4147]: I0216 02:06:15.847470 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0"} err="failed to get container status \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": rpc error: code = NotFound desc = could not find container \"54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0\": container with ID starting with 54057cbf958aea9e109c7d17e98aee5281d0a4015dc820437ffc81643f0510c0 not found: ID does not exist"
Feb 16 02:06:15.847521 master-0 kubenswrapper[4147]: I0216 02:06:15.847510 4147 scope.go:117] "RemoveContainer" containerID="909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"
Feb 16 02:06:15.847816 master-0 kubenswrapper[4147]: I0216 02:06:15.847765 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66"} err="failed to get container status \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": rpc error: code = NotFound desc = could not find container \"909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66\": container with ID starting with 909b82be05dd32fe9dc4e88305f8e9ff7a3b90e3a20dd135cf29281c872a8d66 not found: ID does not exist"
Feb 16 02:06:15.847816 master-0 kubenswrapper[4147]: I0216 02:06:15.847804 4147 scope.go:117] "RemoveContainer" containerID="6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"
Feb 16 02:06:15.848253 master-0 kubenswrapper[4147]: I0216 02:06:15.848205 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb"} err="failed to get container status \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": rpc error: code = NotFound desc = could not find container \"6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb\": container with ID starting with 6b3c984b8aa1dc133b6c6bf02c330b29966a579f745c43faac91ae572c60b4cb not found: ID does not exist"
Feb 16 02:06:15.848253 master-0 kubenswrapper[4147]: I0216 02:06:15.848239 4147 scope.go:117] "RemoveContainer" containerID="ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"
Feb 16 02:06:15.848651 master-0 kubenswrapper[4147]: I0216 02:06:15.848604 4147 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e"} err="failed to get container status \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": rpc error: code = NotFound desc = could not find container \"ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e\": container with ID starting with ded188d3d43c2f077a6171cda2ddb63433611ea745b982d15bf1f5c6b820601e not found: ID does not exist"
Feb 16 02:06:15.902683 master-0 kubenswrapper[4147]: I0216 02:06:15.902601 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.902683 master-0 kubenswrapper[4147]: I0216 02:06:15.902664 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.902925 master-0 kubenswrapper[4147]: I0216 02:06:15.902800 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.902925 master-0 kubenswrapper[4147]: I0216 02:06:15.902856 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.902925 master-0 kubenswrapper[4147]: I0216 02:06:15.902892 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903089 master-0 kubenswrapper[4147]: I0216 02:06:15.902926 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903089 master-0 kubenswrapper[4147]: I0216 02:06:15.903041 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903249 master-0 kubenswrapper[4147]: I0216 02:06:15.903184 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjsbs\" (UniqueName: \"kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903327 master-0 kubenswrapper[4147]: I0216 02:06:15.903268 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903327 master-0 kubenswrapper[4147]: I0216 02:06:15.903302 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903431 master-0 kubenswrapper[4147]: I0216 02:06:15.903339 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903431 master-0 kubenswrapper[4147]: I0216 02:06:15.903370 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903431 master-0 kubenswrapper[4147]: I0216 02:06:15.903412 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903431 master-0 kubenswrapper[4147]: I0216 02:06:15.903418 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903836 master-0 kubenswrapper[4147]: I0216 02:06:15.903560 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903836 master-0 kubenswrapper[4147]: I0216 02:06:15.903563 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903836 master-0 kubenswrapper[4147]: I0216 02:06:15.903621 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903836 master-0 kubenswrapper[4147]: I0216 02:06:15.903618 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903836 master-0 kubenswrapper[4147]: I0216 02:06:15.903644 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903836 master-0 kubenswrapper[4147]: I0216 02:06:15.903778 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.903836 master-0 kubenswrapper[4147]: I0216 02:06:15.903841 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904199 master-0 kubenswrapper[4147]: I0216 02:06:15.903893 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904199 master-0 kubenswrapper[4147]: I0216 02:06:15.903957 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904199 master-0 kubenswrapper[4147]: I0216 02:06:15.904004 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904359 master-0 kubenswrapper[4147]: I0216 02:06:15.904287 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904359 master-0 kubenswrapper[4147]: I0216 02:06:15.904344 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904504 master-0 kubenswrapper[4147]: I0216 02:06:15.904388 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904504 master-0 kubenswrapper[4147]: I0216 02:06:15.904423 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904504 master-0 kubenswrapper[4147]: I0216 02:06:15.904431 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904675 master-0 kubenswrapper[4147]: I0216 02:06:15.904507 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904675 master-0 kubenswrapper[4147]: I0216 02:06:15.904534 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904675 master-0 kubenswrapper[4147]: I0216 02:06:15.904583 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904675 master-0 kubenswrapper[4147]: I0216 02:06:15.904600 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904675 master-0 kubenswrapper[4147]: I0216 02:06:15.904591 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.904675 master-0 kubenswrapper[4147]: I0216 02:06:15.904650 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.905217 master-0 kubenswrapper[4147]: I0216 02:06:15.905116 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.905311 master-0 kubenswrapper[4147]: I0216 02:06:15.905129 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.905311 master-0 kubenswrapper[4147]: I0216 02:06:15.905277 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.906815 master-0 kubenswrapper[4147]: I0216 02:06:15.906760 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:15.923698 master-0 kubenswrapper[4147]: I0216 02:06:15.923633 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsbs\" (UniqueName: \"kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:16.009846 master-0 kubenswrapper[4147]: I0216 02:06:16.009696 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:16.020614 master-0 kubenswrapper[4147]: I0216 02:06:16.020209 4147 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmw46"]
Feb 16 02:06:16.024727 master-0 kubenswrapper[4147]: I0216 02:06:16.024563 4147 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmw46"]
Feb 16 02:06:16.034892 master-0 kubenswrapper[4147]: W0216 02:06:16.034830 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dcef814_353e_4985_9afc_9e545f7853ae.slice/crio-f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788 WatchSource:0}: Error finding container f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788: Status 404 returned error can't find the container with id f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788
Feb 16 02:06:16.186922 master-0 kubenswrapper[4147]: I0216 02:06:16.186798 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:16.187229 master-0 kubenswrapper[4147]: E0216 02:06:16.187037 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:16.194038 master-0 kubenswrapper[4147]: I0216 02:06:16.193959 4147 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d41850-9a4b-4ce2-9902-a59492adeb24" path="/var/lib/kubelet/pods/30d41850-9a4b-4ce2-9902-a59492adeb24/volumes" Feb 16 02:06:16.675090 master-0 kubenswrapper[4147]: I0216 02:06:16.674967 4147 generic.go:334] "Generic (PLEG): container finished" podID="6dcef814-353e-4985-9afc-9e545f7853ae" containerID="64b93c97323f7e51986ec036f1f46d7cb6a600efeaf1c716bc52e696eb3b4391" exitCode=0 Feb 16 02:06:16.675090 master-0 kubenswrapper[4147]: I0216 02:06:16.675045 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerDied","Data":"64b93c97323f7e51986ec036f1f46d7cb6a600efeaf1c716bc52e696eb3b4391"} Feb 16 02:06:16.675090 master-0 kubenswrapper[4147]: I0216 02:06:16.675101 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788"} Feb 16 02:06:17.187564 master-0 kubenswrapper[4147]: I0216 02:06:17.187511 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:17.187909 master-0 kubenswrapper[4147]: E0216 02:06:17.187788 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:17.204700 master-0 kubenswrapper[4147]: I0216 02:06:17.203795 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 02:06:17.683616 master-0 kubenswrapper[4147]: I0216 02:06:17.683544 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"5c93efd00235860df2a1fb67e4926c729002849765a94dec61e1af843ab728f2"} Feb 16 02:06:17.683616 master-0 kubenswrapper[4147]: I0216 02:06:17.683610 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"dd2f6bde7a318ff658097ec27f030059f122a8e83e2de5c43cb60b2107622ad9"} Feb 16 02:06:17.683616 master-0 kubenswrapper[4147]: I0216 02:06:17.683630 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"ce9799941f99b237f44bfce248f984524944d67dab52929924cd9a8fcbcc317a"} Feb 16 02:06:17.684956 master-0 kubenswrapper[4147]: I0216 02:06:17.683649 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"c1bc45615c9ecfa64d90253381a2e646e11a4e04d319bfd8bb80979f5427dc12"} Feb 16 02:06:17.684956 master-0 kubenswrapper[4147]: I0216 02:06:17.683666 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"6a4ec2ca7c7bd0685c2ed78584a21228841502b562d763fb36ef1fca236540d2"} Feb 16 02:06:17.684956 master-0 
kubenswrapper[4147]: I0216 02:06:17.683684 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"015f5f7b13901c83863c86375425948aad692d7bf5cf02a54ce4b6a0bbdb9c25"} Feb 16 02:06:18.187481 master-0 kubenswrapper[4147]: I0216 02:06:18.187383 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:18.187745 master-0 kubenswrapper[4147]: E0216 02:06:18.187625 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:19.187347 master-0 kubenswrapper[4147]: I0216 02:06:19.187279 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:19.188351 master-0 kubenswrapper[4147]: E0216 02:06:19.187529 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:20.151826 master-0 kubenswrapper[4147]: I0216 02:06:20.151271 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:20.151826 master-0 kubenswrapper[4147]: E0216 02:06:20.151543 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 02:06:20.152164 master-0 kubenswrapper[4147]: E0216 02:06:20.151846 4147 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 02:06:20.152164 master-0 kubenswrapper[4147]: E0216 02:06:20.151875 4147 projected.go:194] Error preparing data for projected volume kube-api-access-pfgxq for pod openshift-network-diagnostics/network-check-target-hswdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 02:06:20.152164 master-0 kubenswrapper[4147]: E0216 02:06:20.151960 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq podName:e478bdcc-052e-42f8-91b6-58c26cfc9cfc nodeName:}" failed. No retries permitted until 2026-02-16 02:06:52.151936717 +0000 UTC m=+130.787671873 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pfgxq" (UniqueName: "kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq") pod "network-check-target-hswdj" (UID: "e478bdcc-052e-42f8-91b6-58c26cfc9cfc") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 02:06:20.187213 master-0 kubenswrapper[4147]: I0216 02:06:20.187160 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:20.187723 master-0 kubenswrapper[4147]: E0216 02:06:20.187321 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:20.702240 master-0 kubenswrapper[4147]: I0216 02:06:20.702187 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"0ee70cba75740bf510c4d2468be1ebf6b687e76e1c372848ca7beea393bbde0f"} Feb 16 02:06:21.060492 master-0 kubenswrapper[4147]: I0216 02:06:21.060290 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:21.060717 master-0 kubenswrapper[4147]: E0216 02:06:21.060588 4147 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:21.060788 master-0 kubenswrapper[4147]: E0216 02:06:21.060735 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:07:25.06069051 +0000 UTC m=+163.696425686 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:21.186848 master-0 kubenswrapper[4147]: I0216 02:06:21.186756 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:21.187228 master-0 kubenswrapper[4147]: E0216 02:06:21.187133 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:21.198885 master-0 kubenswrapper[4147]: I0216 02:06:21.198835 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 02:06:22.187004 master-0 kubenswrapper[4147]: I0216 02:06:22.186924 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:22.190463 master-0 kubenswrapper[4147]: E0216 02:06:22.188278 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:22.204198 master-0 kubenswrapper[4147]: I0216 02:06:22.204146 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 02:06:22.226845 master-0 kubenswrapper[4147]: I0216 02:06:22.226656 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=5.226627158 podStartE2EDuration="5.226627158s" podCreationTimestamp="2026-02-16 02:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:06:22.211981053 +0000 UTC m=+100.847716219" watchObservedRunningTime="2026-02-16 02:06:22.226627158 +0000 UTC m=+100.862362314" Feb 16 02:06:22.717917 master-0 kubenswrapper[4147]: I0216 02:06:22.717263 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" event={"ID":"6dcef814-353e-4985-9afc-9e545f7853ae","Type":"ContainerStarted","Data":"ea796aa8a6a12070e66a7b2f74902eec15efef1503b7c50f068119b84ee66f1b"} Feb 16 02:06:22.717917 master-0 kubenswrapper[4147]: I0216 02:06:22.717830 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:22.717917 master-0 kubenswrapper[4147]: I0216 02:06:22.717892 4147 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:22.717917 master-0 kubenswrapper[4147]: I0216 02:06:22.717920 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:22.766410 master-0 kubenswrapper[4147]: I0216 02:06:22.766344 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:22.766658 master-0 kubenswrapper[4147]: I0216 02:06:22.766452 4147 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:22.779247 master-0 kubenswrapper[4147]: I0216 02:06:22.779150 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=1.7791203370000002 podStartE2EDuration="1.779120337s" podCreationTimestamp="2026-02-16 02:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:06:22.226875804 +0000 UTC m=+100.862610990" watchObservedRunningTime="2026-02-16 02:06:22.779120337 +0000 UTC m=+101.414855493" Feb 16 02:06:22.779967 master-0 kubenswrapper[4147]: I0216 02:06:22.779915 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" podStartSLOduration=7.779906527 podStartE2EDuration="7.779906527s" podCreationTimestamp="2026-02-16 02:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:06:22.778651096 +0000 UTC m=+101.414386252" watchObservedRunningTime="2026-02-16 02:06:22.779906527 +0000 UTC m=+101.415641683" Feb 16 02:06:22.800792 master-0 kubenswrapper[4147]: I0216 02:06:22.800682 4147 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=0.800654564 podStartE2EDuration="800.654564ms" podCreationTimestamp="2026-02-16 02:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:06:22.799648849 +0000 UTC m=+101.435384005" watchObservedRunningTime="2026-02-16 02:06:22.800654564 +0000 UTC m=+101.436389720" Feb 16 02:06:23.186866 master-0 kubenswrapper[4147]: I0216 02:06:23.186745 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:23.187079 master-0 kubenswrapper[4147]: E0216 02:06:23.186979 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:23.755972 master-0 kubenswrapper[4147]: I0216 02:06:23.755842 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-hswdj"] Feb 16 02:06:23.756765 master-0 kubenswrapper[4147]: I0216 02:06:23.756064 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:23.756765 master-0 kubenswrapper[4147]: E0216 02:06:23.756242 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:23.762786 master-0 kubenswrapper[4147]: I0216 02:06:23.762429 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gn9mv"] Feb 16 02:06:23.762786 master-0 kubenswrapper[4147]: I0216 02:06:23.762575 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:23.762786 master-0 kubenswrapper[4147]: E0216 02:06:23.762703 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:25.188165 master-0 kubenswrapper[4147]: I0216 02:06:25.187717 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:25.189102 master-0 kubenswrapper[4147]: E0216 02:06:25.188243 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:25.189102 master-0 kubenswrapper[4147]: I0216 02:06:25.187789 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:25.189102 master-0 kubenswrapper[4147]: E0216 02:06:25.188545 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:27.187715 master-0 kubenswrapper[4147]: I0216 02:06:27.187633 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:27.189120 master-0 kubenswrapper[4147]: I0216 02:06:27.187660 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:27.189120 master-0 kubenswrapper[4147]: E0216 02:06:27.187803 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-hswdj" podUID="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" Feb 16 02:06:27.189120 master-0 kubenswrapper[4147]: E0216 02:06:27.187928 4147 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gn9mv" podUID="7f0f9b7d-e663-4927-861b-a9544d483b6e" Feb 16 02:06:28.643088 master-0 kubenswrapper[4147]: I0216 02:06:28.642883 4147 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Feb 16 02:06:28.643088 master-0 kubenswrapper[4147]: I0216 02:06:28.643081 4147 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 16 02:06:28.696347 master-0 kubenswrapper[4147]: I0216 02:06:28.696271 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"] Feb 16 02:06:28.696752 master-0 kubenswrapper[4147]: I0216 02:06:28.696712 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:28.704055 master-0 kubenswrapper[4147]: I0216 02:06:28.703989 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 02:06:28.706173 master-0 kubenswrapper[4147]: I0216 02:06:28.706120 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 02:06:28.706654 master-0 kubenswrapper[4147]: I0216 02:06:28.706583 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 02:06:28.707521 master-0 kubenswrapper[4147]: I0216 02:06:28.707474 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 02:06:28.711859 master-0 kubenswrapper[4147]: I0216 02:06:28.711792 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"] Feb 16 02:06:28.713554 master-0 kubenswrapper[4147]: I0216 02:06:28.712558 4147 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:28.713554 master-0 kubenswrapper[4147]: I0216 02:06:28.713284 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4"] Feb 16 02:06:28.713774 master-0 kubenswrapper[4147]: I0216 02:06:28.713750 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" Feb 16 02:06:28.715566 master-0 kubenswrapper[4147]: I0216 02:06:28.715517 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"] Feb 16 02:06:28.715935 master-0 kubenswrapper[4147]: W0216 02:06:28.715885 4147 reflector.go:561] object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-storage-operator": no relationship found between node 'master-0' and this object Feb 16 02:06:28.716038 master-0 kubenswrapper[4147]: E0216 02:06:28.715958 4147 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-storage-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 16 02:06:28.716038 master-0 kubenswrapper[4147]: I0216 02:06:28.715989 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:28.716482 master-0 kubenswrapper[4147]: W0216 02:06:28.716420 4147 reflector.go:561] object-"openshift-cluster-storage-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-storage-operator": no relationship found between node 'master-0' and this object Feb 16 02:06:28.716579 master-0 kubenswrapper[4147]: E0216 02:06:28.716487 4147 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-storage-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 16 02:06:28.716579 master-0 kubenswrapper[4147]: W0216 02:06:28.716565 4147 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-service-ca-operator": no relationship found between node 'master-0' and this object Feb 16 02:06:28.716703 master-0 kubenswrapper[4147]: E0216 02:06:28.716588 4147 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-service-ca-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 16 02:06:28.716762 master-0 
kubenswrapper[4147]: W0216 02:06:28.716723 4147 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: configmaps "service-ca-operator-config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-service-ca-operator": no relationship found between node 'master-0' and this object
Feb 16 02:06:28.716817 master-0 kubenswrapper[4147]: E0216 02:06:28.716761 4147 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-operator-config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-service-ca-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Feb 16 02:06:28.716817 master-0 kubenswrapper[4147]: I0216 02:06:28.716790 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"]
Feb 16 02:06:28.717153 master-0 kubenswrapper[4147]: I0216 02:06:28.717120 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:28.718200 master-0 kubenswrapper[4147]: I0216 02:06:28.718163 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-bngv9"]
Feb 16 02:06:28.718545 master-0 kubenswrapper[4147]: I0216 02:06:28.718516 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.719619 master-0 kubenswrapper[4147]: I0216 02:06:28.719575 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-4rfwq"]
Feb 16 02:06:28.720315 master-0 kubenswrapper[4147]: I0216 02:06:28.720269 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:06:28.723284 master-0 kubenswrapper[4147]: I0216 02:06:28.723242 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.723590 master-0 kubenswrapper[4147]: I0216 02:06:28.723559 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.724373 master-0 kubenswrapper[4147]: I0216 02:06:28.724336 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"]
Feb 16 02:06:28.724863 master-0 kubenswrapper[4147]: I0216 02:06:28.724828 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:28.736483 master-0 kubenswrapper[4147]: I0216 02:06:28.735735 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"]
Feb 16 02:06:28.740034 master-0 kubenswrapper[4147]: I0216 02:06:28.737612 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.740034 master-0 kubenswrapper[4147]: I0216 02:06:28.738043 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:06:28.740034 master-0 kubenswrapper[4147]: I0216 02:06:28.738614 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"]
Feb 16 02:06:28.740034 master-0 kubenswrapper[4147]: I0216 02:06:28.738656 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 02:06:28.740034 master-0 kubenswrapper[4147]: I0216 02:06:28.738763 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 02:06:28.740034 master-0 kubenswrapper[4147]: I0216 02:06:28.738968 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 02:06:28.740034 master-0 kubenswrapper[4147]: I0216 02:06:28.738984 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 02:06:28.751465 master-0 kubenswrapper[4147]: I0216 02:06:28.751400 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 02:06:28.751671 master-0 kubenswrapper[4147]: I0216 02:06:28.751609 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.751768 master-0 kubenswrapper[4147]: I0216 02:06:28.751736 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.752091 master-0 kubenswrapper[4147]: I0216 02:06:28.752046 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 02:06:28.752207 master-0 kubenswrapper[4147]: I0216 02:06:28.752171 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 02:06:28.752318 master-0 kubenswrapper[4147]: I0216 02:06:28.752282 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 02:06:28.752402 master-0 kubenswrapper[4147]: I0216 02:06:28.752313 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"]
Feb 16 02:06:28.752402 master-0 kubenswrapper[4147]: I0216 02:06:28.752397 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.752655 master-0 kubenswrapper[4147]: I0216 02:06:28.752619 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.752737 master-0 kubenswrapper[4147]: I0216 02:06:28.752693 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:28.753079 master-0 kubenswrapper[4147]: I0216 02:06:28.753039 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"
Feb 16 02:06:28.754858 master-0 kubenswrapper[4147]: I0216 02:06:28.754609 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"]
Feb 16 02:06:28.755288 master-0 kubenswrapper[4147]: I0216 02:06:28.755241 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:28.755877 master-0 kubenswrapper[4147]: I0216 02:06:28.755830 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.756111 master-0 kubenswrapper[4147]: I0216 02:06:28.756067 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.756394 master-0 kubenswrapper[4147]: I0216 02:06:28.756350 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.765132 master-0 kubenswrapper[4147]: I0216 02:06:28.761509 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"]
Feb 16 02:06:28.765132 master-0 kubenswrapper[4147]: I0216 02:06:28.761917 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"]
Feb 16 02:06:28.765132 master-0 kubenswrapper[4147]: I0216 02:06:28.762175 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"
Feb 16 02:06:28.765132 master-0 kubenswrapper[4147]: I0216 02:06:28.762757 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"]
Feb 16 02:06:28.765132 master-0 kubenswrapper[4147]: I0216 02:06:28.763367 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"
Feb 16 02:06:28.765132 master-0 kubenswrapper[4147]: I0216 02:06:28.763420 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:28.765132 master-0 kubenswrapper[4147]: I0216 02:06:28.764707 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"]
Feb 16 02:06:28.766060 master-0 kubenswrapper[4147]: I0216 02:06:28.765346 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:06:28.767226 master-0 kubenswrapper[4147]: I0216 02:06:28.767189 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"]
Feb 16 02:06:28.767577 master-0 kubenswrapper[4147]: I0216 02:06:28.767548 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:06:28.767994 master-0 kubenswrapper[4147]: I0216 02:06:28.767890 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-62wr2"]
Feb 16 02:06:28.768280 master-0 kubenswrapper[4147]: I0216 02:06:28.768240 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 02:06:28.768953 master-0 kubenswrapper[4147]: I0216 02:06:28.768880 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 02:06:28.769261 master-0 kubenswrapper[4147]: I0216 02:06:28.769235 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2"
Feb 16 02:06:28.769764 master-0 kubenswrapper[4147]: I0216 02:06:28.769739 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"]
Feb 16 02:06:28.770202 master-0 kubenswrapper[4147]: I0216 02:06:28.770183 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"]
Feb 16 02:06:28.770582 master-0 kubenswrapper[4147]: I0216 02:06:28.770564 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"
Feb 16 02:06:28.770913 master-0 kubenswrapper[4147]: I0216 02:06:28.770895 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:06:28.771200 master-0 kubenswrapper[4147]: I0216 02:06:28.770592 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"]
Feb 16 02:06:28.771496 master-0 kubenswrapper[4147]: I0216 02:06:28.771461 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.771683 master-0 kubenswrapper[4147]: I0216 02:06:28.771659 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 02:06:28.771806 master-0 kubenswrapper[4147]: I0216 02:06:28.771789 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"]
Feb 16 02:06:28.772151 master-0 kubenswrapper[4147]: I0216 02:06:28.772133 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"
Feb 16 02:06:28.772928 master-0 kubenswrapper[4147]: I0216 02:06:28.772833 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"
Feb 16 02:06:28.773084 master-0 kubenswrapper[4147]: I0216 02:06:28.771817 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.771854 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.771898 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.771941 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.771979 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.772010 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.772059 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.772124 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.772218 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.772321 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 02:06:28.779373 master-0 kubenswrapper[4147]: I0216 02:06:28.776657 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 02:06:28.780298 master-0 kubenswrapper[4147]: I0216 02:06:28.780272 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 02:06:28.780389 master-0 kubenswrapper[4147]: I0216 02:06:28.780357 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 02:06:28.780483 master-0 kubenswrapper[4147]: I0216 02:06:28.780469 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.780576 master-0 kubenswrapper[4147]: I0216 02:06:28.780558 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.780721 master-0 kubenswrapper[4147]: I0216 02:06:28.780705 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 02:06:28.780817 master-0 kubenswrapper[4147]: I0216 02:06:28.780805 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 02:06:28.780984 master-0 kubenswrapper[4147]: I0216 02:06:28.780955 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"]
Feb 16 02:06:28.781036 master-0 kubenswrapper[4147]: I0216 02:06:28.780832 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.781131 master-0 kubenswrapper[4147]: I0216 02:06:28.780861 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 02:06:28.781209 master-0 kubenswrapper[4147]: I0216 02:06:28.781196 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.781315 master-0 kubenswrapper[4147]: I0216 02:06:28.781279 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-bngv9"]
Feb 16 02:06:28.781419 master-0 kubenswrapper[4147]: I0216 02:06:28.781405 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.782046 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.782173 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.782221 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.782352 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.782541 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783516 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"]
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783538 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"]
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783578 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783633 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783704 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783716 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783780 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783871 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 02:06:28.784080 master-0 kubenswrapper[4147]: I0216 02:06:28.783918 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 02:06:28.785303 master-0 kubenswrapper[4147]: I0216 02:06:28.785161 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 02:06:28.786390 master-0 kubenswrapper[4147]: I0216 02:06:28.786344 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"]
Feb 16 02:06:28.789843 master-0 kubenswrapper[4147]: I0216 02:06:28.789643 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 02:06:28.789843 master-0 kubenswrapper[4147]: I0216 02:06:28.789796 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 02:06:28.789975 master-0 kubenswrapper[4147]: I0216 02:06:28.789878 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 02:06:28.792658 master-0 kubenswrapper[4147]: I0216 02:06:28.792594 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 02:06:28.793369 master-0 kubenswrapper[4147]: I0216 02:06:28.792661 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 02:06:28.793369 master-0 kubenswrapper[4147]: I0216 02:06:28.792957 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 02:06:28.793369 master-0 kubenswrapper[4147]: I0216 02:06:28.793154 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 02:06:28.793369 master-0 kubenswrapper[4147]: I0216 02:06:28.793249 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.826620 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.826660 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827253 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827290 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827314 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827350 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827403 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827425 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7llx6\" (UniqueName: \"kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827494 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr7gn\" (UniqueName: \"kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827522 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f8fj\" (UniqueName: \"kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827552 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2582m\" (UniqueName: \"kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m\") pod \"csi-snapshot-controller-operator-7b87b97578-8n9v4\" (UID: \"d008dbd4-e713-4f2e-b64d-ca9cfc83a502\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827588 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827619 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827643 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.828250 master-0 kubenswrapper[4147]: I0216 02:06:28.827673 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.827703 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.827739 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9sgx\" (UniqueName: \"kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.827770 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.827848 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fmj\" (UniqueName: \"kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.827911 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.827953 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lvq\" (UniqueName: \"kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.827997 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.828024 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.828130 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.828162 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:06:28.828729 master-0 kubenswrapper[4147]: I0216 02:06:28.828227 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhz2m\" (UniqueName: \"kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"
Feb 16 02:06:28.833288 master-0 kubenswrapper[4147]: I0216 02:06:28.833246 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 02:06:28.835099 master-0 kubenswrapper[4147]: I0216 02:06:28.835067 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 02:06:28.835145 master-0 kubenswrapper[4147]: I0216 02:06:28.835124 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 02:06:28.930073 master-0 kubenswrapper[4147]: I0216 02:06:28.930023 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"
Feb 16 02:06:28.930073 master-0 kubenswrapper[4147]: I0216 02:06:28.930065 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:06:28.930073 master-0 kubenswrapper[4147]: I0216 02:06:28.930088 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:06:28.930460 master-0 kubenswrapper[4147]: I0216 02:06:28.930106 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"
Feb 16 02:06:28.930460 master-0 kubenswrapper[4147]: I0216 02:06:28.930131 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fmj\" (UniqueName: \"kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:28.930460 master-0 kubenswrapper[4147]: I0216 02:06:28.930199 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"
Feb 16 02:06:28.930460 master-0 kubenswrapper[4147]: I0216 02:06:28.930240 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"
Feb 16 02:06:28.930460 master-0 kubenswrapper[4147]: I0216 02:06:28.930260 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:28.930460 master-0 kubenswrapper[4147]: I0216 02:06:28.930285 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:06:28.930460 master-0 kubenswrapper[4147]: I0216 02:06:28.930375 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8lvq\" (UniqueName: \"kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:28.930460 master-0 kubenswrapper[4147]: I0216 02:06:28.930423 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"
Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.930693 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") "
pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: E0216 02:06:28.930838 4147 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: E0216 02:06:28.930898 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.430875703 +0000 UTC m=+108.066610829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.930840 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvf8t\" (UniqueName: \"kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.930940 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rc6w\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.930958 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.930982 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.931025 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.931060 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.931097 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhz2m\" (UniqueName: \"kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m\") pod 
\"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:28.931108 master-0 kubenswrapper[4147]: I0216 02:06:28.931125 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931147 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931277 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931306 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931333 
4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpcj\" (UniqueName: \"kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931359 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931395 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcq6v\" (UniqueName: \"kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: E0216 02:06:28.931408 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931420 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: 
\"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: E0216 02:06:28.931463 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.431431627 +0000 UTC m=+108.067166753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931491 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d49c\" (UniqueName: \"kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931534 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gmn\" (UniqueName: \"kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931572 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931608 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:28.931958 master-0 kubenswrapper[4147]: I0216 02:06:28.931635 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.931662 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.931689 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " 
pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.931730 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.931763 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: E0216 02:06:28.931765 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: E0216 02:06:28.931810 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.431799806 +0000 UTC m=+108.067534932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.931911 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.932419 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.932474 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.932496 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.932519 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.932560 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.932579 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr872\" (UniqueName: \"kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.932603 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:28.933045 master-0 kubenswrapper[4147]: I0216 02:06:28.932628 
4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7llx6\" (UniqueName: \"kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932653 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932680 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932705 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932742 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7gn\" (UniqueName: 
\"kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932779 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932807 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8fj\" (UniqueName: \"kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932832 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2582m\" (UniqueName: \"kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m\") pod \"csi-snapshot-controller-operator-7b87b97578-8n9v4\" (UID: \"d008dbd4-e713-4f2e-b64d-ca9cfc83a502\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932857 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: 
\"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932880 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932903 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932926 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.932949 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.933120 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24b6h\" (UniqueName: \"kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.933164 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bff42\" (UniqueName: \"kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:28.934848 master-0 kubenswrapper[4147]: I0216 02:06:28.933209 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933302 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933345 
4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933366 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: E0216 02:06:28.933421 4147 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: E0216 02:06:28.933515 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.433501888 +0000 UTC m=+108.069237004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933543 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933535 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933564 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: E0216 02:06:28.933767 4147 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: E0216 02:06:28.933874 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.433842887 +0000 UTC m=+108.069578093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933913 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933921 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.933991 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9sgx\" (UniqueName: \"kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.934026 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:28.938391 master-0 kubenswrapper[4147]: I0216 02:06:28.934331 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:28.939471 master-0 kubenswrapper[4147]: I0216 02:06:28.934365 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"
Feb 16 02:06:28.939471 master-0 kubenswrapper[4147]: I0216 02:06:28.934392 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9p9r\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:06:28.939471 master-0 kubenswrapper[4147]: I0216 02:06:28.934729 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.939471 master-0 kubenswrapper[4147]: I0216 02:06:28.937260 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:28.939471 master-0 kubenswrapper[4147]: I0216 02:06:28.937705 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:06:28.939471 master-0 kubenswrapper[4147]: I0216 02:06:28.938033 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:28.939471 master-0 kubenswrapper[4147]: I0216 02:06:28.938552 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:28.963114 master-0 kubenswrapper[4147]: I0216 02:06:28.962415 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"]
Feb 16 02:06:28.968089 master-0 kubenswrapper[4147]: I0216 02:06:28.967941 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"]
Feb 16 02:06:28.969046 master-0 kubenswrapper[4147]: I0216 02:06:28.968996 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"]
Feb 16 02:06:28.973661 master-0 kubenswrapper[4147]: I0216 02:06:28.970640 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"]
Feb 16 02:06:28.975673 master-0 kubenswrapper[4147]: I0216 02:06:28.975633 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"]
Feb 16 02:06:28.975673 master-0 kubenswrapper[4147]: I0216 02:06:28.975681 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"]
Feb 16 02:06:28.975673 master-0 kubenswrapper[4147]: I0216 02:06:28.975702 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-4rfwq"]
Feb 16 02:06:28.976096 master-0 kubenswrapper[4147]: I0216 02:06:28.975720 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"]
Feb 16 02:06:28.976096 master-0 kubenswrapper[4147]: I0216 02:06:28.975739 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"]
Feb 16 02:06:28.978825 master-0 kubenswrapper[4147]: I0216 02:06:28.978753 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4"]
Feb 16 02:06:28.978825 master-0 kubenswrapper[4147]: I0216 02:06:28.978799 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"]
Feb 16 02:06:28.990149 master-0 kubenswrapper[4147]: I0216 02:06:28.981081 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"]
Feb 16 02:06:28.990149 master-0 kubenswrapper[4147]: I0216 02:06:28.983088 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"]
Feb 16 02:06:28.990149 master-0 kubenswrapper[4147]: I0216 02:06:28.983120 4147 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-9bnql"]
Feb 16 02:06:28.990149 master-0 kubenswrapper[4147]: I0216 02:06:28.983602 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-9bnql"
Feb 16 02:06:28.991799 master-0 kubenswrapper[4147]: I0216 02:06:28.991667 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 02:06:28.994204 master-0 kubenswrapper[4147]: I0216 02:06:28.994153 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8fj\" (UniqueName: \"kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:06:28.995449 master-0 kubenswrapper[4147]: I0216 02:06:28.995385 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhz2m\" (UniqueName: \"kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"
Feb 16 02:06:29.001472 master-0 kubenswrapper[4147]: I0216 02:06:29.001391 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9sgx\" (UniqueName: \"kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:06:29.001713 master-0 kubenswrapper[4147]: I0216 02:06:29.000651 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7llx6\" (UniqueName: \"kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:29.002796 master-0 kubenswrapper[4147]: I0216 02:06:29.002741 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lvq\" (UniqueName: \"kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:29.004746 master-0 kubenswrapper[4147]: I0216 02:06:29.004697 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fmj\" (UniqueName: \"kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:29.010719 master-0 kubenswrapper[4147]: I0216 02:06:29.010641 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7gn\" (UniqueName: \"kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:29.035613 master-0 kubenswrapper[4147]: I0216 02:06:29.035085 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:29.035613 master-0 kubenswrapper[4147]: I0216 02:06:29.035142 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:29.035613 master-0 kubenswrapper[4147]: E0216 02:06:29.035305 4147 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 02:06:29.035613 master-0 kubenswrapper[4147]: E0216 02:06:29.035393 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.535367259 +0000 UTC m=+108.171102405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found
Feb 16 02:06:29.036067 master-0 kubenswrapper[4147]: I0216 02:06:29.035840 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:06:29.036067 master-0 kubenswrapper[4147]: I0216 02:06:29.035917 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgpcj\" (UniqueName: \"kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2"
Feb 16 02:06:29.036067 master-0 kubenswrapper[4147]: I0216 02:06:29.035954 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcq6v\" (UniqueName: \"kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:06:29.036311 master-0 kubenswrapper[4147]: I0216 02:06:29.036265 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"
Feb 16 02:06:29.036404 master-0 kubenswrapper[4147]: I0216 02:06:29.036331 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d49c\" (UniqueName: \"kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"
Feb 16 02:06:29.036404 master-0 kubenswrapper[4147]: I0216 02:06:29.036363 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gmn\" (UniqueName: \"kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"
Feb 16 02:06:29.036563 master-0 kubenswrapper[4147]: I0216 02:06:29.036510 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"
Feb 16 02:06:29.036563 master-0 kubenswrapper[4147]: I0216 02:06:29.036559 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"
Feb 16 02:06:29.036679 master-0 kubenswrapper[4147]: I0216 02:06:29.036582 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2"
Feb 16 02:06:29.036679 master-0 kubenswrapper[4147]: E0216 02:06:29.036667 4147 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 16 02:06:29.036800 master-0 kubenswrapper[4147]: E0216 02:06:29.036712 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.536696972 +0000 UTC m=+108.172432098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found
Feb 16 02:06:29.036873 master-0 kubenswrapper[4147]: I0216 02:06:29.036838 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:29.037045 master-0 kubenswrapper[4147]: I0216 02:06:29.036880 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:29.037116 master-0 kubenswrapper[4147]: I0216 02:06:29.037056 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:29.037175 master-0 kubenswrapper[4147]: I0216 02:06:29.037120 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr872\" (UniqueName: \"kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:06:29.037175 master-0 kubenswrapper[4147]: I0216 02:06:29.037158 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"
Feb 16 02:06:29.037175 master-0 kubenswrapper[4147]: I0216 02:06:29.037163 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:06:29.037340 master-0 kubenswrapper[4147]: I0216 02:06:29.037198 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"
Feb 16 02:06:29.037340 master-0 kubenswrapper[4147]: I0216 02:06:29.037229 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:29.037340 master-0 kubenswrapper[4147]: I0216 02:06:29.037277 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.037650 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.037717 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.037741 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.037762 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.037801 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.037814 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.037834 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.038143 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.038343 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.038630 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.038750 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bff42\" (UniqueName: \"kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.038812 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.038817 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24b6h\" (UniqueName: \"kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.038888 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"
Feb 16 02:06:29.041219 master-0 kubenswrapper[4147]: I0216 02:06:29.038916 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.038945 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.038968 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9p9r\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.038988 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039010 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039048 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039049 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039119 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039158 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039194 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039229 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039261 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: E0216 02:06:29.039315 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: E0216 02:06:29.039372 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.539353898 +0000 UTC m=+108.175089024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039319 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:29.042221 master-0 kubenswrapper[4147]: I0216 02:06:29.039425 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rc6w\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:29.043087 master-0
kubenswrapper[4147]: I0216 02:06:29.039488 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.039547 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf8t\" (UniqueName: \"kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.040245 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.040409 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.040650 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: 
\"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.040732 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.040770 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.042261 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.040821 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:29.043087 
master-0 kubenswrapper[4147]: E0216 02:06:29.042850 4147 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: E0216 02:06:29.042917 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.542900917 +0000 UTC m=+108.178636043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.042911 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.042938 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:29.043087 master-0 kubenswrapper[4147]: I0216 02:06:29.042975 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:29.043990 master-0 kubenswrapper[4147]: E0216 02:06:29.043210 4147 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 02:06:29.043990 master-0 kubenswrapper[4147]: E0216 02:06:29.043277 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.543256566 +0000 UTC m=+108.178991722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found Feb 16 02:06:29.043990 master-0 kubenswrapper[4147]: I0216 02:06:29.043283 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:29.043990 master-0 kubenswrapper[4147]: I0216 02:06:29.043682 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: 
\"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:29.044713 master-0 kubenswrapper[4147]: E0216 02:06:29.044645 4147 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:29.044713 master-0 kubenswrapper[4147]: I0216 02:06:29.044685 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:29.044902 master-0 kubenswrapper[4147]: E0216 02:06:29.044751 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:29.544711082 +0000 UTC m=+108.180446228 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:29.046215 master-0 kubenswrapper[4147]: I0216 02:06:29.044985 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:29.046215 master-0 kubenswrapper[4147]: I0216 02:06:29.046229 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:29.049195 master-0 kubenswrapper[4147]: I0216 02:06:29.049157 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:29.049715 master-0 kubenswrapper[4147]: I0216 02:06:29.049671 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: 
\"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:29.051190 master-0 kubenswrapper[4147]: I0216 02:06:29.051161 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:29.142986 master-0 kubenswrapper[4147]: I0216 02:06:29.142723 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.142986 master-0 kubenswrapper[4147]: I0216 02:06:29.142798 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.143256 master-0 kubenswrapper[4147]: I0216 02:06:29.143044 4147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47lht\" (UniqueName: \"kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.151193 master-0 kubenswrapper[4147]: I0216 02:06:29.151076 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:29.168598 master-0 kubenswrapper[4147]: I0216 02:06:29.168500 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:06:29.187087 master-0 kubenswrapper[4147]: I0216 02:06:29.187034 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:29.187215 master-0 kubenswrapper[4147]: I0216 02:06:29.187090 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:29.190401 master-0 kubenswrapper[4147]: I0216 02:06:29.190063 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 02:06:29.190401 master-0 kubenswrapper[4147]: I0216 02:06:29.190293 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 02:06:29.190856 master-0 kubenswrapper[4147]: I0216 02:06:29.190792 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 02:06:29.244918 master-0 kubenswrapper[4147]: I0216 02:06:29.244837 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47lht\" (UniqueName: \"kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.245864 master-0 kubenswrapper[4147]: I0216 02:06:29.245775 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.246012 master-0 kubenswrapper[4147]: I0216 02:06:29.245915 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.246012 master-0 kubenswrapper[4147]: I0216 02:06:29.245993 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.247380 master-0 kubenswrapper[4147]: I0216 02:06:29.247275 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.263527 master-0 kubenswrapper[4147]: I0216 02:06:29.262876 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-62wr2"] Feb 16 02:06:29.265395 master-0 kubenswrapper[4147]: I0216 02:06:29.265354 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"] Feb 16 02:06:29.267114 master-0 kubenswrapper[4147]: I0216 02:06:29.267027 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"] Feb 16 02:06:29.269831 master-0 kubenswrapper[4147]: I0216 02:06:29.268464 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"] Feb 16 02:06:29.379372 master-0 kubenswrapper[4147]: I0216 02:06:29.378578 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rc6w\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:29.381535 master-0 kubenswrapper[4147]: I0216 02:06:29.381280 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:29.382287 master-0 kubenswrapper[4147]: I0216 02:06:29.382239 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d49c\" (UniqueName: \"kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:29.384806 master-0 kubenswrapper[4147]: I0216 02:06:29.384763 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:29.386226 master-0 kubenswrapper[4147]: I0216 02:06:29.385661 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr872\" (UniqueName: \"kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:29.386875 master-0 kubenswrapper[4147]: I0216 02:06:29.386828 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgpcj\" (UniqueName: \"kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:29.387636 master-0 kubenswrapper[4147]: I0216 02:06:29.387599 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47lht\" (UniqueName: \"kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.388955 master-0 kubenswrapper[4147]: I0216 02:06:29.388890 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:29.389879 master-0 kubenswrapper[4147]: I0216 02:06:29.389831 4147 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:29.390223 master-0 kubenswrapper[4147]: I0216 02:06:29.390177 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9p9r\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:29.390944 master-0 kubenswrapper[4147]: I0216 02:06:29.390650 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gmn\" (UniqueName: \"kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:29.390944 master-0 kubenswrapper[4147]: I0216 02:06:29.390801 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bff42\" (UniqueName: \"kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:29.390944 master-0 kubenswrapper[4147]: I0216 02:06:29.390891 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcq6v\" (UniqueName: \"kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v\") pod 
\"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:29.391964 master-0 kubenswrapper[4147]: I0216 02:06:29.391909 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:29.393340 master-0 kubenswrapper[4147]: I0216 02:06:29.393294 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:29.394172 master-0 kubenswrapper[4147]: I0216 02:06:29.394119 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:29.397367 master-0 kubenswrapper[4147]: I0216 02:06:29.397318 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:29.398262 master-0 kubenswrapper[4147]: I0216 02:06:29.398070 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvf8t\" (UniqueName: \"kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:29.400068 master-0 kubenswrapper[4147]: I0216 02:06:29.400033 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24b6h\" (UniqueName: \"kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:29.422487 master-0 kubenswrapper[4147]: W0216 02:06:29.421653 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a67f799_fd8d_4bee_9d67_720151c1650b.slice/crio-65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896 WatchSource:0}: Error finding container 65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896: Status 404 returned error can't find the container with id 65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896 Feb 16 02:06:29.454947 master-0 kubenswrapper[4147]: I0216 02:06:29.454898 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 
16 02:06:29.454947 master-0 kubenswrapper[4147]: I0216 02:06:29.454946 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:29.455132 master-0 kubenswrapper[4147]: E0216 02:06:29.455096 4147 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 02:06:29.455193 master-0 kubenswrapper[4147]: E0216 02:06:29.455168 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.455146528 +0000 UTC m=+109.090881654 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found Feb 16 02:06:29.455237 master-0 kubenswrapper[4147]: I0216 02:06:29.455208 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:29.455265 master-0 kubenswrapper[4147]: E0216 02:06:29.455243 4147 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:29.455290 master-0 kubenswrapper[4147]: I0216 02:06:29.455271 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:29.455321 master-0 kubenswrapper[4147]: E0216 02:06:29.455312 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.455294282 +0000 UTC m=+109.091029398 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:29.455423 master-0 kubenswrapper[4147]: E0216 02:06:29.455392 4147 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:29.455510 master-0 kubenswrapper[4147]: E0216 02:06:29.455490 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.455466436 +0000 UTC m=+109.091201642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found Feb 16 02:06:29.455601 master-0 kubenswrapper[4147]: I0216 02:06:29.455566 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:29.455664 master-0 kubenswrapper[4147]: E0216 02:06:29.455621 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:29.455711 master-0 kubenswrapper[4147]: E0216 02:06:29.455671 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not 
found Feb 16 02:06:29.455738 master-0 kubenswrapper[4147]: E0216 02:06:29.455726 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.455701392 +0000 UTC m=+109.091436518 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:29.455769 master-0 kubenswrapper[4147]: E0216 02:06:29.455745 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.455736133 +0000 UTC m=+109.091471259 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:29.478902 master-0 kubenswrapper[4147]: I0216 02:06:29.478763 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:29.484925 master-0 kubenswrapper[4147]: I0216 02:06:29.484850 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:29.497973 master-0 kubenswrapper[4147]: I0216 02:06:29.497916 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:29.500153 master-0 kubenswrapper[4147]: I0216 02:06:29.500081 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:29.514978 master-0 kubenswrapper[4147]: I0216 02:06:29.512866 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:29.556728 master-0 kubenswrapper[4147]: I0216 02:06:29.556663 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:29.556874 master-0 kubenswrapper[4147]: I0216 02:06:29.556842 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:29.557139 master-0 kubenswrapper[4147]: E0216 02:06:29.557071 4147 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:29.557233 master-0 kubenswrapper[4147]: E0216 02:06:29.557189 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.557157932 +0000 UTC m=+109.192893068 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found Feb 16 02:06:29.557233 master-0 kubenswrapper[4147]: E0216 02:06:29.557212 4147 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 02:06:29.557380 master-0 kubenswrapper[4147]: E0216 02:06:29.557343 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.557278315 +0000 UTC m=+109.193013471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found Feb 16 02:06:29.557525 master-0 kubenswrapper[4147]: I0216 02:06:29.557483 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:29.557600 master-0 kubenswrapper[4147]: I0216 02:06:29.557558 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: 
\"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:29.557662 master-0 kubenswrapper[4147]: I0216 02:06:29.557622 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:29.557797 master-0 kubenswrapper[4147]: I0216 02:06:29.557728 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:29.557947 master-0 kubenswrapper[4147]: E0216 02:06:29.557918 4147 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 02:06:29.558020 master-0 kubenswrapper[4147]: E0216 02:06:29.557994 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.557971623 +0000 UTC m=+109.193706779 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found Feb 16 02:06:29.558170 master-0 kubenswrapper[4147]: E0216 02:06:29.558136 4147 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:29.558246 master-0 kubenswrapper[4147]: E0216 02:06:29.558217 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.558193118 +0000 UTC m=+109.193928284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:29.558348 master-0 kubenswrapper[4147]: E0216 02:06:29.558317 4147 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:29.558421 master-0 kubenswrapper[4147]: E0216 02:06:29.558390 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.558367193 +0000 UTC m=+109.194102359 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:29.558573 master-0 kubenswrapper[4147]: E0216 02:06:29.558539 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 02:06:29.558667 master-0 kubenswrapper[4147]: E0216 02:06:29.558610 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.558590628 +0000 UTC m=+109.194325794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found Feb 16 02:06:29.641678 master-0 kubenswrapper[4147]: I0216 02:06:29.632843 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:29.672978 master-0 kubenswrapper[4147]: I0216 02:06:29.672053 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:29.760607 master-0 kubenswrapper[4147]: I0216 02:06:29.760480 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9bnql" event={"ID":"2a67f799-fd8d-4bee-9d67-720151c1650b","Type":"ContainerStarted","Data":"65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896"} Feb 16 02:06:29.836972 master-0 kubenswrapper[4147]: I0216 02:06:29.836571 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"] Feb 16 02:06:29.837956 master-0 kubenswrapper[4147]: I0216 02:06:29.837924 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-bngv9"] Feb 16 02:06:29.841502 master-0 kubenswrapper[4147]: W0216 02:06:29.841090 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9cd32bc_a13a_44ee_ba52_7bb335c7007b.slice/crio-d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d WatchSource:0}: Error finding container d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d: Status 404 returned error can't find the container with id d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d Feb 16 02:06:29.842469 master-0 kubenswrapper[4147]: W0216 02:06:29.842443 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c02961f_30ec_4405_b7fa_9c4192342ae9.slice/crio-4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e WatchSource:0}: Error finding container 4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e: Status 404 returned error can't find the container with id 
4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e Feb 16 02:06:29.870272 master-0 kubenswrapper[4147]: I0216 02:06:29.870230 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 16 02:06:29.930816 master-0 kubenswrapper[4147]: E0216 02:06:29.930747 4147 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:06:29.930882 master-0 kubenswrapper[4147]: E0216 02:06:29.930828 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config podName:91938be6-9ae4-4849-abe8-fc842daecd23 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.430809821 +0000 UTC m=+109.066544937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config") pod "service-ca-operator-5dc4688546-ck5nr" (UID: "91938be6-9ae4-4849-abe8-fc842daecd23") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:06:29.933612 master-0 kubenswrapper[4147]: E0216 02:06:29.933568 4147 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 02:06:29.933685 master-0 kubenswrapper[4147]: E0216 02:06:29.933660 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert podName:91938be6-9ae4-4849-abe8-fc842daecd23 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.433640762 +0000 UTC m=+109.069375908 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert") pod "service-ca-operator-5dc4688546-ck5nr" (UID: "91938be6-9ae4-4849-abe8-fc842daecd23") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:06:29.998839 master-0 kubenswrapper[4147]: E0216 02:06:29.998781 4147 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:06:29.998839 master-0 kubenswrapper[4147]: E0216 02:06:29.998850 4147 projected.go:194] Error preparing data for projected volume kube-api-access-2582m for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:06:29.999047 master-0 kubenswrapper[4147]: E0216 02:06:29.998939 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m podName:d008dbd4-e713-4f2e-b64d-ca9cfc83a502 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:30.49891252 +0000 UTC m=+109.134647646 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2582m" (UniqueName: "kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m") pod "csi-snapshot-controller-operator-7b87b97578-8n9v4" (UID: "d008dbd4-e713-4f2e-b64d-ca9cfc83a502") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:06:30.084715 master-0 kubenswrapper[4147]: I0216 02:06:30.084614 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 02:06:30.197284 master-0 kubenswrapper[4147]: I0216 02:06:30.196155 4147 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 02:06:30.242089 master-0 kubenswrapper[4147]: I0216 02:06:30.242017 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"] Feb 16 02:06:30.250925 master-0 kubenswrapper[4147]: W0216 02:06:30.250837 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode379cfaf_3a4c_40e7_8641_3524b3669295.slice/crio-99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5 WatchSource:0}: Error finding container 99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5: Status 404 returned error can't find the container with id 99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5 Feb 16 02:06:30.281830 master-0 kubenswrapper[4147]: I0216 02:06:30.281478 4147 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 16 02:06:30.472363 master-0 kubenswrapper[4147]: I0216 02:06:30.472146 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: 
\"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:30.472363 master-0 kubenswrapper[4147]: I0216 02:06:30.472256 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:30.472808 master-0 kubenswrapper[4147]: E0216 02:06:30.472493 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 02:06:30.472808 master-0 kubenswrapper[4147]: E0216 02:06:30.472618 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.472583723 +0000 UTC m=+111.108318869 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:30.472808 master-0 kubenswrapper[4147]: I0216 02:06:30.472677 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:30.472808 master-0 kubenswrapper[4147]: I0216 02:06:30.472748 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:30.473054 master-0 kubenswrapper[4147]: I0216 02:06:30.472871 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:30.473054 master-0 kubenswrapper[4147]: E0216 02:06:30.472879 4147 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 02:06:30.473054 master-0 kubenswrapper[4147]: E0216 02:06:30.472972 4147 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.472944122 +0000 UTC m=+111.108679268 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found Feb 16 02:06:30.473054 master-0 kubenswrapper[4147]: E0216 02:06:30.473003 4147 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:30.473311 master-0 kubenswrapper[4147]: E0216 02:06:30.473089 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.473060635 +0000 UTC m=+111.108795811 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:30.473311 master-0 kubenswrapper[4147]: I0216 02:06:30.473132 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:30.473311 master-0 kubenswrapper[4147]: I0216 02:06:30.473229 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:30.473529 master-0 kubenswrapper[4147]: E0216 02:06:30.473401 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:30.473529 master-0 kubenswrapper[4147]: E0216 02:06:30.473493 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.473471815 +0000 UTC m=+111.109206961 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:30.473671 master-0 kubenswrapper[4147]: E0216 02:06:30.473589 4147 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:30.473671 master-0 kubenswrapper[4147]: E0216 02:06:30.473637 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.473620329 +0000 UTC m=+111.109355615 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found Feb 16 02:06:30.474598 master-0 kubenswrapper[4147]: I0216 02:06:30.474544 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:30.479675 master-0 kubenswrapper[4147]: I0216 02:06:30.479615 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:30.519375 master-0 kubenswrapper[4147]: I0216 
02:06:30.517100 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"] Feb 16 02:06:30.520193 master-0 kubenswrapper[4147]: I0216 02:06:30.520080 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"] Feb 16 02:06:30.521180 master-0 kubenswrapper[4147]: I0216 02:06:30.521119 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"] Feb 16 02:06:30.522847 master-0 kubenswrapper[4147]: I0216 02:06:30.522798 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"] Feb 16 02:06:30.525970 master-0 kubenswrapper[4147]: I0216 02:06:30.524367 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"] Feb 16 02:06:30.526146 master-0 kubenswrapper[4147]: I0216 02:06:30.526090 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"] Feb 16 02:06:30.528216 master-0 kubenswrapper[4147]: I0216 02:06:30.527741 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"] Feb 16 02:06:30.546878 master-0 kubenswrapper[4147]: W0216 02:06:30.545902 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2d2601_481d_4e86_ac4c_3d34d5691261.slice/crio-48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32 WatchSource:0}: Error finding container 48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32: Status 404 returned error can't find the container with id 
48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32 Feb 16 02:06:30.574394 master-0 kubenswrapper[4147]: I0216 02:06:30.574345 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:30.575415 master-0 kubenswrapper[4147]: I0216 02:06:30.574519 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2582m\" (UniqueName: \"kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m\") pod \"csi-snapshot-controller-operator-7b87b97578-8n9v4\" (UID: \"d008dbd4-e713-4f2e-b64d-ca9cfc83a502\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" Feb 16 02:06:30.575415 master-0 kubenswrapper[4147]: E0216 02:06:30.574588 4147 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 02:06:30.575625 master-0 kubenswrapper[4147]: E0216 02:06:30.575517 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.575468119 +0000 UTC m=+111.211203275 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found Feb 16 02:06:30.575808 master-0 kubenswrapper[4147]: I0216 02:06:30.575689 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:30.575808 master-0 kubenswrapper[4147]: I0216 02:06:30.575793 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:30.575961 master-0 kubenswrapper[4147]: I0216 02:06:30.575864 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:30.575961 master-0 kubenswrapper[4147]: E0216 02:06:30.575892 4147 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:30.576066 master-0 kubenswrapper[4147]: E0216 
02:06:30.575974 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.575948981 +0000 UTC m=+111.211684157 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:30.576312 master-0 kubenswrapper[4147]: E0216 02:06:30.576063 4147 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:30.576595 master-0 kubenswrapper[4147]: E0216 02:06:30.576331 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.576315421 +0000 UTC m=+111.212050577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:30.576595 master-0 kubenswrapper[4147]: E0216 02:06:30.576339 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 02:06:30.576595 master-0 kubenswrapper[4147]: I0216 02:06:30.576144 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:30.576595 master-0 kubenswrapper[4147]: E0216 02:06:30.576505 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.576491945 +0000 UTC m=+111.212227091 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found Feb 16 02:06:30.576595 master-0 kubenswrapper[4147]: E0216 02:06:30.576234 4147 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 02:06:30.576937 master-0 kubenswrapper[4147]: I0216 02:06:30.576609 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:30.576937 master-0 kubenswrapper[4147]: E0216 02:06:30.576677 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.576642539 +0000 UTC m=+111.212377765 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found Feb 16 02:06:30.576937 master-0 kubenswrapper[4147]: E0216 02:06:30.576748 4147 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:30.576937 master-0 kubenswrapper[4147]: E0216 02:06:30.576928 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:32.576898425 +0000 UTC m=+111.212633601 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found Feb 16 02:06:30.581569 master-0 kubenswrapper[4147]: I0216 02:06:30.581289 4147 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2582m\" (UniqueName: \"kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m\") pod \"csi-snapshot-controller-operator-7b87b97578-8n9v4\" (UID: \"d008dbd4-e713-4f2e-b64d-ca9cfc83a502\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" Feb 16 02:06:30.585427 master-0 kubenswrapper[4147]: I0216 02:06:30.585378 4147 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:30.631109 master-0 kubenswrapper[4147]: I0216 02:06:30.631030 4147 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" Feb 16 02:06:30.768894 master-0 kubenswrapper[4147]: I0216 02:06:30.768718 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" event={"ID":"6c02961f-30ec-4405-b7fa-9c4192342ae9","Type":"ContainerStarted","Data":"4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e"} Feb 16 02:06:30.777186 master-0 kubenswrapper[4147]: I0216 02:06:30.770553 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" event={"ID":"a8f33151-61df-4b66-ba85-9ba210779059","Type":"ContainerStarted","Data":"19f7fe92f509ebf58263703a24e99425e1eb0493ad65313aae50f23d57b15adc"} Feb 16 02:06:30.777186 master-0 kubenswrapper[4147]: I0216 02:06:30.772999 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" event={"ID":"c9cd32bc-a13a-44ee-ba52-7bb335c7007b","Type":"ContainerStarted","Data":"d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d"} Feb 16 02:06:30.777186 master-0 kubenswrapper[4147]: I0216 02:06:30.774284 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" event={"ID":"980aa005-f51d-4ca2-aee6-a6fdeefd86d0","Type":"ContainerStarted","Data":"2287a210e87155c02ab6e622acb47d96fd89d65dc49d2afbe29745b869fd7b87"} Feb 16 02:06:30.777186 master-0 kubenswrapper[4147]: I0216 02:06:30.775964 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerStarted","Data":"3aa52e31a2b9a476aba0a48b18d458a5a18722a85353d604bbc35df3b9829545"} Feb 16 02:06:30.780080 master-0 
kubenswrapper[4147]: I0216 02:06:30.780019 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" event={"ID":"4a5b01c1-1231-4e69-8b6c-c4981b65b26e","Type":"ContainerStarted","Data":"93fe8320cd8b094f12e9b856631a3581df910023e217ba523e4fc8bbdc13eff6"} Feb 16 02:06:30.781500 master-0 kubenswrapper[4147]: I0216 02:06:30.781408 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" event={"ID":"1f2d2601-481d-4e86-ac4c-3d34d5691261","Type":"ContainerStarted","Data":"48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32"} Feb 16 02:06:30.783034 master-0 kubenswrapper[4147]: I0216 02:06:30.782983 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" event={"ID":"724ac845-3835-458b-9645-e665be135ff9","Type":"ContainerStarted","Data":"248a67424bdbb1372ecb2fb070f15261787aceee6d09513f5274ef915ebe68ae"} Feb 16 02:06:30.784870 master-0 kubenswrapper[4147]: I0216 02:06:30.784786 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" event={"ID":"e379cfaf-3a4c-40e7-8641-3524b3669295","Type":"ContainerStarted","Data":"99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5"} Feb 16 02:06:30.786411 master-0 kubenswrapper[4147]: I0216 02:06:30.786341 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" event={"ID":"1743372f-bdb0-4558-b47b-3714f3aa3fde","Type":"ContainerStarted","Data":"aafd16466f6eed6a672c6fa59488bcd1ea8cbc42fa0dfe86540d9e97cd364cb6"} Feb 16 02:06:31.182668 master-0 kubenswrapper[4147]: I0216 02:06:31.182603 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4"] Feb 16 02:06:31.184662 master-0 kubenswrapper[4147]: I0216 02:06:31.184539 4147 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"] Feb 16 02:06:31.210188 master-0 kubenswrapper[4147]: W0216 02:06:31.210101 4147 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91938be6_9ae4_4849_abe8_fc842daecd23.slice/crio-ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272 WatchSource:0}: Error finding container ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272: Status 404 returned error can't find the container with id ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272 Feb 16 02:06:31.789822 master-0 kubenswrapper[4147]: I0216 02:06:31.789771 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" event={"ID":"91938be6-9ae4-4849-abe8-fc842daecd23","Type":"ContainerStarted","Data":"ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272"} Feb 16 02:06:31.792474 master-0 kubenswrapper[4147]: I0216 02:06:31.792426 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" event={"ID":"980aa005-f51d-4ca2-aee6-a6fdeefd86d0","Type":"ContainerStarted","Data":"ff658673f7455373622a5b5ee3d4af2de4dca2c4bf35a4e09ed477558c99902a"} Feb 16 02:06:31.796096 master-0 kubenswrapper[4147]: I0216 02:06:31.795891 4147 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" event={"ID":"d008dbd4-e713-4f2e-b64d-ca9cfc83a502","Type":"ContainerStarted","Data":"8dd32bd58a893bd46ee61ae39a01f4492842e7fb2c4d56eeca513230f073e979"} Feb 16 02:06:32.207851 master-0 kubenswrapper[4147]: I0216 
02:06:32.207785 4147 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" podStartSLOduration=75.207765908 podStartE2EDuration="1m15.207765908s" podCreationTimestamp="2026-02-16 02:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:06:31.807078986 +0000 UTC m=+110.442814102" watchObservedRunningTime="2026-02-16 02:06:32.207765908 +0000 UTC m=+110.843501044" Feb 16 02:06:32.496716 master-0 kubenswrapper[4147]: I0216 02:06:32.496625 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:32.496716 master-0 kubenswrapper[4147]: I0216 02:06:32.496672 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:32.496914 master-0 kubenswrapper[4147]: E0216 02:06:32.496838 4147 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 02:06:32.496914 master-0 kubenswrapper[4147]: E0216 02:06:32.496912 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" 
failed. No retries permitted until 2026-02-16 02:06:36.496893819 +0000 UTC m=+115.132628935 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found Feb 16 02:06:32.497087 master-0 kubenswrapper[4147]: E0216 02:06:32.497055 4147 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:32.497140 master-0 kubenswrapper[4147]: E0216 02:06:32.497126 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.497111164 +0000 UTC m=+115.132846280 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:32.497175 master-0 kubenswrapper[4147]: I0216 02:06:32.497162 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:32.497271 master-0 kubenswrapper[4147]: I0216 02:06:32.497214 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:32.497304 master-0 kubenswrapper[4147]: I0216 02:06:32.497274 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:32.497348 master-0 kubenswrapper[4147]: E0216 02:06:32.497327 4147 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:32.497385 master-0 kubenswrapper[4147]: E0216 02:06:32.497371 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 
nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.49735811 +0000 UTC m=+115.133093226 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found Feb 16 02:06:32.497385 master-0 kubenswrapper[4147]: E0216 02:06:32.497374 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 02:06:32.497455 master-0 kubenswrapper[4147]: E0216 02:06:32.497396 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.497390821 +0000 UTC m=+115.133125937 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:32.497455 master-0 kubenswrapper[4147]: E0216 02:06:32.497407 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:32.497455 master-0 kubenswrapper[4147]: E0216 02:06:32.497447 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.497424712 +0000 UTC m=+115.133159828 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:32.599366 master-0 kubenswrapper[4147]: I0216 02:06:32.599103 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:32.599366 master-0 kubenswrapper[4147]: I0216 02:06:32.599366 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:32.599637 master-0 kubenswrapper[4147]: I0216 02:06:32.599397 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:32.599637 master-0 kubenswrapper[4147]: I0216 02:06:32.599453 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:32.599637 master-0 kubenswrapper[4147]: I0216 02:06:32.599489 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:32.599637 master-0 kubenswrapper[4147]: I0216 02:06:32.599568 4147 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:32.599783 master-0 kubenswrapper[4147]: E0216 02:06:32.599742 4147 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 02:06:32.599824 master-0 kubenswrapper[4147]: E0216 02:06:32.599799 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.599783225 +0000 UTC m=+115.235518331 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600213 4147 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600250 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.600241426 +0000 UTC m=+115.235976542 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600303 4147 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600332 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.600324528 +0000 UTC m=+115.236059644 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600378 4147 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600403 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.60039752 +0000 UTC m=+115.236132636 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600469 4147 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600496 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.600489452 +0000 UTC m=+115.236224568 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600542 4147 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:32.601479 master-0 kubenswrapper[4147]: E0216 02:06:32.600565 4147 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:36.600558604 +0000 UTC m=+115.236293720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found Feb 16 02:06:36.356392 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 02:06:36.400155 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 02:06:36.400570 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 02:06:36.402283 master-0 systemd[1]: kubelet.service: Consumed 10.492s CPU time. Feb 16 02:06:36.421588 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 02:06:36.575242 master-0 kubenswrapper[7721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 02:06:36.575242 master-0 kubenswrapper[7721]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 02:06:36.575242 master-0 kubenswrapper[7721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 02:06:36.575242 master-0 kubenswrapper[7721]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 02:06:36.575242 master-0 kubenswrapper[7721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 02:06:36.575242 master-0 kubenswrapper[7721]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 02:06:36.576576 master-0 kubenswrapper[7721]: I0216 02:06:36.575362 7721 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 02:06:36.578326 master-0 kubenswrapper[7721]: W0216 02:06:36.578290 7721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 02:06:36.578326 master-0 kubenswrapper[7721]: W0216 02:06:36.578312 7721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 02:06:36.578326 master-0 kubenswrapper[7721]: W0216 02:06:36.578318 7721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 02:06:36.578326 master-0 kubenswrapper[7721]: W0216 02:06:36.578326 7721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 02:06:36.578326 master-0 kubenswrapper[7721]: W0216 02:06:36.578333 7721 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578340 7721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578348 7721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578354 7721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578361 7721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578387 7721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578394 7721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578401 7721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578407 7721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578412 7721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578418 7721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578423 7721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578429 7721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578454 7721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578460 7721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578465 7721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578471 7721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578476 7721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578481 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 02:06:36.578551 master-0 kubenswrapper[7721]: W0216 02:06:36.578487 7721 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578492 7721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578497 7721 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578502 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578508 7721 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578513 7721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578519 7721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578524 7721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578530 7721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578536 7721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578541 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578546 7721 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578554 7721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578563 7721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578570 7721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578576 7721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578582 7721 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578588 7721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578593 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 02:06:36.579184 master-0 kubenswrapper[7721]: W0216 02:06:36.578598 7721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578604 7721 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578609 7721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578614 7721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578618 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578623 7721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578628 7721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578633 7721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578638 7721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578643 7721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578649 7721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578654 7721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578659 7721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578673 7721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578679 7721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578683 7721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578688 7721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578693 7721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578698 7721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578703 7721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 02:06:36.579816 master-0 kubenswrapper[7721]: W0216 02:06:36.578708 7721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578713 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578717 7721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578724 7721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578728 7721 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578733 7721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578738 7721 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578743 7721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578748 7721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: W0216 02:06:36.578753 7721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578856 7721 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578868 7721 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578878 7721 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578885 7721 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578891 7721 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578897 7721 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578904 7721 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578911 7721 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578917 7721 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578923 7721 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578931 7721 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578937 7721 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 02:06:36.580304 master-0 kubenswrapper[7721]: I0216 02:06:36.578943 7721 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578949 7721 flags.go:64] FLAG: --cgroup-root=""
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578955 7721 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578964 7721 flags.go:64] FLAG: --client-ca-file=""
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578969 7721 flags.go:64] FLAG: --cloud-config=""
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578975 7721 flags.go:64] FLAG: --cloud-provider=""
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578980 7721 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578987 7721 flags.go:64] FLAG: --cluster-domain=""
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578992 7721 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.578998 7721 flags.go:64] FLAG: --config-dir=""
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579003 7721 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579009 7721 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579017 7721 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579022 7721 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579028 7721 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579034 7721 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579039 7721 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579046 7721 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579052 7721 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579058 7721 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579063 7721 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579070 7721 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579076 7721 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579081 7721 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579087 7721 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 02:06:36.580839 master-0 kubenswrapper[7721]: I0216 02:06:36.579092 7721 flags.go:64] FLAG: --enable-server="true"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579098 7721 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579106 7721 flags.go:64] FLAG: --event-burst="100"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579112 7721 flags.go:64] FLAG: --event-qps="50"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579117 7721 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579123 7721 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579129 7721 flags.go:64] FLAG: --eviction-hard=""
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579135 7721 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579141 7721 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579147 7721 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579155 7721 flags.go:64] FLAG: --eviction-soft=""
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579160 7721 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579166 7721 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579172 7721 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579178 7721 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579183 7721 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579189 7721 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579195 7721 flags.go:64] FLAG: --feature-gates=""
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579203 7721 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579209 7721 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579215 7721 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579221 7721 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579227 7721 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579233 7721 flags.go:64] FLAG: --help="false"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579239 7721 flags.go:64] FLAG: --hostname-override=""
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579244 7721 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 02:06:36.581465 master-0 kubenswrapper[7721]: I0216 02:06:36.579249 7721 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579255 7721 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579261 7721 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579267 7721 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579273 7721 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579278 7721 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579283 7721 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579289 7721 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579295 7721 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579300 7721 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579306 7721 flags.go:64] FLAG: --kube-reserved=""
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579311 7721 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579317 7721 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579322 7721 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579328 7721 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579334 7721 flags.go:64] FLAG: --lock-file=""
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579342 7721 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579348 7721 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579354 7721 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579362 7721 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579368 7721 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579373 7721 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579379 7721 flags.go:64] FLAG: --logging-format="text"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579385 7721 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579390 7721 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 02:06:36.582103 master-0 kubenswrapper[7721]: I0216 02:06:36.579396 7721 flags.go:64] FLAG: --manifest-url=""
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579401 7721 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579409 7721 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579414 7721 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579421 7721 flags.go:64] FLAG: --max-pods="110"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579427 7721 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579449 7721 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579454 7721 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579463 7721 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579468 7721 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579474 7721 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579479 7721 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579491 7721 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579497 7721 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579503 7721 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579509 7721 flags.go:64] FLAG: --pod-cidr=""
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579515 7721 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579522 7721 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579528 7721 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579534 7721 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579539 7721 flags.go:64] FLAG: --port="10250"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579545 7721 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579550 7721 flags.go:64] FLAG: --provider-id=""
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579556 7721 flags.go:64] FLAG: --qos-reserved=""
Feb 16 02:06:36.582687 master-0 kubenswrapper[7721]: I0216 02:06:36.579563 7721 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579569 7721 flags.go:64] FLAG: --register-node="true"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579574 7721 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579580 7721 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579589 7721 flags.go:64] FLAG: --registry-burst="10"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579595 7721 flags.go:64] FLAG: --registry-qps="5"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579600 7721 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579606 7721 flags.go:64] FLAG: --reserved-memory=""
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579612 7721 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579618 7721 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579624 7721 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579629 7721 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579634 7721 flags.go:64] FLAG: --runonce="false"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579640 7721 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579646 7721 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579651 7721 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579659 7721 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579665 7721 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579671 7721 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579676 7721 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579682 7721 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579687 7721 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579693 7721 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579698 7721 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579704 7721 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 02:06:36.583476 master-0 kubenswrapper[7721]: I0216 02:06:36.579710 7721 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579716 7721 flags.go:64] FLAG: --system-cgroups=""
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579722 7721 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579730 7721 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579743 7721 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579748 7721 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579755 7721 flags.go:64] FLAG: --tls-min-version=""
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579762 7721 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579768 7721 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579773 7721 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579779 7721 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579785 7721 flags.go:64] FLAG: --v="2"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579792 7721 flags.go:64] FLAG: --version="false"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579799 7721 flags.go:64] FLAG: --vmodule=""
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579806 7721 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: I0216 02:06:36.579812 7721 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: W0216 02:06:36.579935 7721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: W0216 02:06:36.579942 7721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: W0216 02:06:36.579948 7721 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: W0216 02:06:36.579953 7721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: W0216 02:06:36.579959 7721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: W0216 02:06:36.579964 7721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: W0216 02:06:36.579970 7721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 02:06:36.584265 master-0 kubenswrapper[7721]: W0216 02:06:36.579985 7721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.579991 7721 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.579996 7721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580001 7721 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580006 7721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580011 7721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580016 7721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580021 7721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580028 7721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580034 7721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580039 7721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580044 7721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580050 7721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580056 7721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580061 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580066 7721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580073 7721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580078 7721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580083 7721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580088 7721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 02:06:36.585137 master-0 kubenswrapper[7721]: W0216 02:06:36.580093 7721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580098 7721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580103 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580107 7721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580112 7721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580117 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580122 7721 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580127 7721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580132 7721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580137 7721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580142 7721 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580147 7721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580154 7721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580159 7721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580164 7721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580169 7721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580174 7721 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580179 7721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580184 7721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580189 7721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 02:06:36.585978 master-0 kubenswrapper[7721]: W0216 02:06:36.580194 7721 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580198 7721 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580203 7721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580208 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580213 7721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580218 7721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580223 7721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580227
7721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580235 7721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580240 7721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580247 7721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580254 7721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580260 7721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580266 7721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580271 7721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580276 7721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580281 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580286 7721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580291 7721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 02:06:36.586643 master-0 kubenswrapper[7721]: W0216 02:06:36.580297 7721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 02:06:36.587254 master-0 kubenswrapper[7721]: W0216 02:06:36.580303 7721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 02:06:36.587254 master-0 kubenswrapper[7721]: W0216 02:06:36.580309 7721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:06:36.587254 master-0 kubenswrapper[7721]: W0216 02:06:36.580315 7721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 02:06:36.587254 master-0 kubenswrapper[7721]: W0216 02:06:36.580320 7721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:06:36.587254 master-0 kubenswrapper[7721]: W0216 02:06:36.580327 7721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 02:06:36.587254 master-0 kubenswrapper[7721]: I0216 02:06:36.580335 7721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 02:06:36.592112 master-0 kubenswrapper[7721]: I0216 02:06:36.592067 7721 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 16 02:06:36.592270 master-0 kubenswrapper[7721]: I0216 02:06:36.592196 7721 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 02:06:36.592450 master-0 kubenswrapper[7721]: W0216 02:06:36.592384 7721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 02:06:36.592450 master-0 kubenswrapper[7721]: W0216 02:06:36.592396 7721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 
02:06:36.592450 master-0 kubenswrapper[7721]: W0216 02:06:36.592402 7721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 02:06:36.592450 master-0 kubenswrapper[7721]: W0216 02:06:36.592406 7721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 02:06:36.592450 master-0 kubenswrapper[7721]: W0216 02:06:36.592410 7721 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 02:06:36.592450 master-0 kubenswrapper[7721]: W0216 02:06:36.592414 7721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 02:06:36.592450 master-0 kubenswrapper[7721]: W0216 02:06:36.592418 7721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 02:06:36.592450 master-0 kubenswrapper[7721]: W0216 02:06:36.592423 7721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:06:36.592828 master-0 kubenswrapper[7721]: W0216 02:06:36.592428 7721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 02:06:36.592828 master-0 kubenswrapper[7721]: W0216 02:06:36.592807 7721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592915 7721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592924 7721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592927 7721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592931 7721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592935 7721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592939 7721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592943 7721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592947 7721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592950 7721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592954 7721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592958 7721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 02:06:36.592961 7721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 02:06:36.592985 master-0 kubenswrapper[7721]: W0216 
02:06:36.592965 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 02:06:36.593450 master-0 kubenswrapper[7721]: W0216 02:06:36.592968 7721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 02:06:36.593450 master-0 kubenswrapper[7721]: W0216 02:06:36.593427 7721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:06:36.593577 master-0 kubenswrapper[7721]: W0216 02:06:36.593543 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 02:06:36.593577 master-0 kubenswrapper[7721]: W0216 02:06:36.593552 7721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 02:06:36.593577 master-0 kubenswrapper[7721]: W0216 02:06:36.593555 7721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 02:06:36.593711 master-0 kubenswrapper[7721]: W0216 02:06:36.593559 7721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 02:06:36.593796 master-0 kubenswrapper[7721]: W0216 02:06:36.593765 7721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 02:06:36.593878 master-0 kubenswrapper[7721]: W0216 02:06:36.593848 7721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.593938 7721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.593972 7721 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.593980 7721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.593985 7721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.593989 7721 feature_gate.go:330] unrecognized feature 
gate: OVNObservability Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.593994 7721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.593999 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.594004 7721 feature_gate.go:330] unrecognized feature gate: Example Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.594008 7721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.594013 7721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 02:06:36.594039 master-0 kubenswrapper[7721]: W0216 02:06:36.594017 7721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 02:06:36.594558 master-0 kubenswrapper[7721]: W0216 02:06:36.594524 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 02:06:36.594558 master-0 kubenswrapper[7721]: W0216 02:06:36.594535 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 02:06:36.594682 master-0 kubenswrapper[7721]: W0216 02:06:36.594646 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 02:06:36.594682 master-0 kubenswrapper[7721]: W0216 02:06:36.594655 7721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594661 7721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594778 7721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594784 7721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594789 7721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594793 7721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594797 7721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594801 7721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594806 7721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594811 7721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 02:06:36.594837 master-0 kubenswrapper[7721]: W0216 02:06:36.594817 7721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 02:06:36.595193 master-0 kubenswrapper[7721]: W0216 02:06:36.595168 7721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 02:06:36.595269 master-0 kubenswrapper[7721]: W0216 02:06:36.595237 7721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 02:06:36.595269 master-0 kubenswrapper[7721]: W0216 02:06:36.595245 7721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 02:06:36.595269 master-0 kubenswrapper[7721]: W0216 02:06:36.595249 7721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 02:06:36.595453 master-0 kubenswrapper[7721]: W0216 02:06:36.595388 7721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 02:06:36.595453 master-0 kubenswrapper[7721]: W0216 02:06:36.595398 7721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 02:06:36.595453 master-0 kubenswrapper[7721]: W0216 02:06:36.595404 7721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 02:06:36.595453 master-0 kubenswrapper[7721]: W0216 02:06:36.595408 7721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 02:06:36.595453 master-0 kubenswrapper[7721]: W0216 02:06:36.595414 7721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 02:06:36.595453 master-0 kubenswrapper[7721]: W0216 02:06:36.595418 7721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 02:06:36.595453 master-0 kubenswrapper[7721]: W0216 02:06:36.595423 7721 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 02:06:36.595453 master-0 kubenswrapper[7721]: W0216 02:06:36.595427 7721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 
02:06:36.595768 master-0 kubenswrapper[7721]: W0216 02:06:36.595723 7721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:06:36.595768 master-0 kubenswrapper[7721]: W0216 02:06:36.595732 7721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 02:06:36.595768 master-0 kubenswrapper[7721]: W0216 02:06:36.595736 7721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 02:06:36.595768 master-0 kubenswrapper[7721]: W0216 02:06:36.595740 7721 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 02:06:36.596004 master-0 kubenswrapper[7721]: I0216 02:06:36.595747 7721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 02:06:36.596168 master-0 kubenswrapper[7721]: W0216 02:06:36.596139 7721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 02:06:36.596247 master-0 kubenswrapper[7721]: W0216 02:06:36.596213 7721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 02:06:36.596247 master-0 kubenswrapper[7721]: W0216 02:06:36.596222 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 02:06:36.596247 master-0 kubenswrapper[7721]: W0216 02:06:36.596226 7721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 02:06:36.596427 master-0 kubenswrapper[7721]: W0216 02:06:36.596388 7721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 
02:06:36.596427 master-0 kubenswrapper[7721]: W0216 02:06:36.596404 7721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 02:06:36.596593 master-0 kubenswrapper[7721]: W0216 02:06:36.596550 7721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 02:06:36.596593 master-0 kubenswrapper[7721]: W0216 02:06:36.596562 7721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 02:06:36.596593 master-0 kubenswrapper[7721]: W0216 02:06:36.596567 7721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 02:06:36.596749 master-0 kubenswrapper[7721]: W0216 02:06:36.596721 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 02:06:36.596749 master-0 kubenswrapper[7721]: W0216 02:06:36.596731 7721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596835 7721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596866 7721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596872 7721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596877 7721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596881 7721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596886 7721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596893 7721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596900 7721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596904 7721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596909 7721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 02:06:36.596939 master-0 kubenswrapper[7721]: W0216 02:06:36.596913 7721 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 02:06:36.597679 master-0 kubenswrapper[7721]: W0216 02:06:36.596917 7721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 02:06:36.597679 master-0 kubenswrapper[7721]: W0216 02:06:36.597649 7721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 02:06:36.597844 master-0 kubenswrapper[7721]: W0216 02:06:36.597657 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 02:06:36.597844 master-0 kubenswrapper[7721]: W0216 02:06:36.597810 7721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 02:06:36.597844 master-0 kubenswrapper[7721]: W0216 02:06:36.597816 7721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.597821 7721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598015 7721 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598021 7721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598027 7721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 02:06:36.598087 master-0 
kubenswrapper[7721]: W0216 02:06:36.598032 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598036 7721 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598042 7721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598046 7721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598050 7721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598055 7721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 02:06:36.598087 master-0 kubenswrapper[7721]: W0216 02:06:36.598060 7721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598659 7721 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598673 7721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598679 7721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598683 7721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598688 7721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598694 7721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598700 7721 feature_gate.go:330] unrecognized feature gate: Example Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598706 7721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 02:06:36.598737 master-0 kubenswrapper[7721]: W0216 02:06:36.598712 7721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:06:36.599214 master-0 kubenswrapper[7721]: W0216 02:06:36.598717 7721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599270 7721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599301 7721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599306 7721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599311 7721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599315 7721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599320 7721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599325 7721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599330 7721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599334 7721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599339 7721 
feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599343 7721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599348 7721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 02:06:36.599372 master-0 kubenswrapper[7721]: W0216 02:06:36.599352 7721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599925 7721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599958 7721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599964 7721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599968 7721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599972 7721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599976 7721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599980 7721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599987 7721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599992 7721 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.599996 7721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 02:06:36.600032 master-0 kubenswrapper[7721]: W0216 02:06:36.600001 7721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 02:06:36.601240 master-0 kubenswrapper[7721]: I0216 02:06:36.600008 7721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 02:06:36.601240 master-0 kubenswrapper[7721]: I0216 02:06:36.600834 7721 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 02:06:36.604039 master-0 kubenswrapper[7721]: I0216 02:06:36.604005 7721 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 02:06:36.604286 master-0 kubenswrapper[7721]: I0216 02:06:36.604236 7721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
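The `feature_gate.go:386] feature gates: {map[...]}` lines above are the authoritative record of the gate values the kubelet actually resolved, printed as a Go map literal. A minimal sketch for pulling that map into a Python dict, assuming the exact line format shown in this journal:

```python
import re

def parse_feature_gates(line: str) -> dict[str, bool]:
    """Parse a kubelet 'feature gates: {map[...]}' journal line into a dict."""
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not m:
        return {}
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

line = ('I0216 02:06:36.600008 7721 feature_gate.go:386] feature gates: '
        '{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}')
print(parse_feature_gates(line))
# {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```

Comparing the parsed dicts from the three `feature_gate.go:386]` lines in this journal confirms that all three enumerations resolved to the same effective gate set.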
Feb 16 02:06:36.604795 master-0 kubenswrapper[7721]: I0216 02:06:36.604761 7721 server.go:997] "Starting client certificate rotation" Feb 16 02:06:36.604953 master-0 kubenswrapper[7721]: I0216 02:06:36.604888 7721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 02:06:36.605229 master-0 kubenswrapper[7721]: I0216 02:06:36.605114 7721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 01:56:50 +0000 UTC, rotation deadline is 2026-02-16 22:05:57.033738427 +0000 UTC Feb 16 02:06:36.605311 master-0 kubenswrapper[7721]: I0216 02:06:36.605229 7721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h59m20.428514347s for next certificate rotation Feb 16 02:06:36.606193 master-0 kubenswrapper[7721]: I0216 02:06:36.606144 7721 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 02:06:36.608657 master-0 kubenswrapper[7721]: I0216 02:06:36.608546 7721 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 02:06:36.612902 master-0 kubenswrapper[7721]: I0216 02:06:36.612862 7721 log.go:25] "Validated CRI v1 runtime API" Feb 16 02:06:36.615538 master-0 kubenswrapper[7721]: I0216 02:06:36.615504 7721 log.go:25] "Validated CRI v1 image API" Feb 16 02:06:36.617107 master-0 kubenswrapper[7721]: I0216 02:06:36.617084 7721 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 02:06:36.621781 master-0 kubenswrapper[7721]: I0216 02:06:36.621756 7721 fs.go:135] Filesystem UUIDs: map[62dc72f5-7748-49f9-b4d1-75449f1d8b55:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 16 02:06:36.622172 master-0 kubenswrapper[7721]: I0216 02:06:36.621853 7721 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/19f7fe92f509ebf58263703a24e99425e1eb0493ad65313aae50f23d57b15adc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/19f7fe92f509ebf58263703a24e99425e1eb0493ad65313aae50f23d57b15adc/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2287a210e87155c02ab6e622acb47d96fd89d65dc49d2afbe29745b869fd7b87/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2287a210e87155c02ab6e622acb47d96fd89d65dc49d2afbe29745b869fd7b87/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/248a67424bdbb1372ecb2fb070f15261787aceee6d09513f5274ef915ebe68ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/248a67424bdbb1372ecb2fb070f15261787aceee6d09513f5274ef915ebe68ae/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/359c73798649b5d5b089f1492d973d2f87ffd23f53f2f5868ba22d8d7543d4cc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/359c73798649b5d5b089f1492d973d2f87ffd23f53f2f5868ba22d8d7543d4cc/userdata/shm major:0 minor:120 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3aa52e31a2b9a476aba0a48b18d458a5a18722a85353d604bbc35df3b9829545/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3aa52e31a2b9a476aba0a48b18d458a5a18722a85353d604bbc35df3b9829545/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/450f24c15fabf2ce3093e6381873f7497e388b2d4b0a5acae355eb63b714bf74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/450f24c15fabf2ce3093e6381873f7497e388b2d4b0a5acae355eb63b714bf74/userdata/shm major:0 minor:77 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896/userdata/shm major:0 minor:278 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8dd32bd58a893bd46ee61ae39a01f4492842e7fb2c4d56eeca513230f073e979/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8dd32bd58a893bd46ee61ae39a01f4492842e7fb2c4d56eeca513230f073e979/userdata/shm major:0 minor:323 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/93fe8320cd8b094f12e9b856631a3581df910023e217ba523e4fc8bbdc13eff6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/93fe8320cd8b094f12e9b856631a3581df910023e217ba523e4fc8bbdc13eff6/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aafd16466f6eed6a672c6fa59488bcd1ea8cbc42fa0dfe86540d9e97cd364cb6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aafd16466f6eed6a672c6fa59488bcd1ea8cbc42fa0dfe86540d9e97cd364cb6/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72/userdata/shm major:0 minor:166 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272/userdata/shm major:0 minor:321 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788/userdata/shm major:0 minor:144 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb/userdata/shm major:0 minor:140 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/kube-api-access-8rc6w:{mountpoint:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/kube-api-access-8rc6w major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~projected/kube-api-access major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~secret/serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~projected/kube-api-access-8d49c:{mountpoint:/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~projected/kube-api-access-8d49c major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~projected/kube-api-access-lvf8t:{mountpoint:/var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~projected/kube-api-access-lvf8t major:0 minor:273 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~projected/kube-api-access-n4gmn:{mountpoint:/var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~projected/kube-api-access-n4gmn major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a67f799-fd8d-4bee-9d67-720151c1650b/volumes/kubernetes.io~projected/kube-api-access-47lht:{mountpoint:/var/lib/kubelet/pods/2a67f799-fd8d-4bee-9d67-720151c1650b/volumes/kubernetes.io~projected/kube-api-access-47lht major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~projected/kube-api-access-t9sgx:{mountpoint:/var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~projected/kube-api-access-t9sgx major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/430c146b-ceaf-411a-add6-ce949243aabf/volumes/kubernetes.io~projected/kube-api-access-vdllq:{mountpoint:/var/lib/kubelet/pods/430c146b-ceaf-411a-add6-ce949243aabf/volumes/kubernetes.io~projected/kube-api-access-vdllq major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~projected/kube-api-access-nl7r8:{mountpoint:/var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~projected/kube-api-access-nl7r8 major:0 minor:70 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~projected/kube-api-access-f8lvq:{mountpoint:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~projected/kube-api-access-f8lvq major:0 minor:245 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~projected/kube-api-access-zr872:{mountpoint:/var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~projected/kube-api-access-zr872 major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~secret/serving-cert major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~projected/kube-api-access-7llx6:{mountpoint:/var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~projected/kube-api-access-7llx6 major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~secret/serving-cert major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~projected/kube-api-access-pjsbs:{mountpoint:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~projected/kube-api-access-pjsbs major:0 minor:139 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~projected/kube-api-access-bff42:{mountpoint:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~projected/kube-api-access-bff42 major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/etcd-client major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/serving-cert major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~projected/kube-api-access-6f8fj:{mountpoint:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~projected/kube-api-access-6f8fj major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~projected/kube-api-access-5m4sb:{mountpoint:/var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~projected/kube-api-access-5m4sb major:0 minor:133 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff/volumes/kubernetes.io~projected/kube-api-access major:0 minor:108 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~projected/kube-api-access-bhz2m:{mountpoint:/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~projected/kube-api-access-bhz2m major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~secret/serving-cert major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~projected/kube-api-access major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~secret/serving-cert major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~projected/kube-api-access-k2qvg:{mountpoint:/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~projected/kube-api-access-k2qvg major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~secret/serving-cert major:0 minor:254 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/kube-api-access-s9p9r:{mountpoint:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/kube-api-access-s9p9r major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~projected/kube-api-access major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~projected/kube-api-access-t7fmj:{mountpoint:/var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~projected/kube-api-access-t7fmj major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6088119-1125-4271-8c0b-0675e700edd9/volumes/kubernetes.io~projected/kube-api-access-jgpcj:{mountpoint:/var/lib/kubelet/pods/b6088119-1125-4271-8c0b-0675e700edd9/volumes/kubernetes.io~projected/kube-api-access-jgpcj major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~projected/kube-api-access-24b6h:{mountpoint:/var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~projected/kube-api-access-24b6h major:0 minor:276 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~projected/kube-api-access-xr7gn:{mountpoint:/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~projected/kube-api-access-xr7gn major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~secret/serving-cert major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d008dbd4-e713-4f2e-b64d-ca9cfc83a502/volumes/kubernetes.io~projected/kube-api-access-2582m:{mountpoint:/var/lib/kubelet/pods/d008dbd4-e713-4f2e-b64d-ca9cfc83a502/volumes/kubernetes.io~projected/kube-api-access-2582m major:0 minor:320 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~projected/kube-api-access-jlnkb:{mountpoint:/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~projected/kube-api-access-jlnkb major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~secret/webhook-cert major:0 minor:165 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~projected/kube-api-access-gcq6v:{mountpoint:/var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~projected/kube-api-access-gcq6v major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~projected/kube-api-access-58cq8:{mountpoint:/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~projected/kube-api-access-58cq8 major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75/volumes/kubernetes.io~projected/kube-api-access-4ns9l:{mountpoint:/var/lib/kubelet/pods/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75/volumes/kubernetes.io~projected/kube-api-access-4ns9l major:0 minor:128 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/3cfcebeeab3f705a7ad8b725dc4e5f5ad5596daa4425443330f3e9b3bbe6905e/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/e341cb9331b0e1405401f0adcde76cf27e8779b1826f31464f0a8a51f71e18b1/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/310634e6d0f7931700ee0ddf5c483dedca45928231d0b1c092041db02444afe8/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/68f1733e4847cb00b9491f9cab45a2882a3c8ab7b2b7631d0ad08535c9f52013/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-122:{mountpoint:/var/lib/containers/storage/overlay/8ab7da6c1c07f08f7d77afa1a99d4653b459d42329c716dadaf4c11c6125dc35/merged major:0 minor:122 fsType:overlay blockSize:0} overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/8178f31c03b49d6f2963c83d2a6c31838666ea3255991295b90a5327f570a8ff/merged major:0 minor:124 fsType:overlay blockSize:0} 
overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/b50a1317c43369c3944f99de41cccc4b6694ce586f9df3f06a9b086923ed4fdc/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/6e9afbac13e454e623a09bf9a8403e8fc60c5a051c19f9e25f6eef8dcb794501/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/1d64b1376fddf11bac05423cfa723b2a1c876471b3e65585677b948b17032172/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-142:{mountpoint:/var/lib/containers/storage/overlay/1a25e24392c0d16a9d21ecc80e6eb250c115d250ad2174ee4413b9befe9b5dbc/merged major:0 minor:142 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/77a0271ce4805a938af450dac532ebb8770ade03689b2bc7034d7d7ebd13a331/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/fdc356e2a1779d00a7b145eabe64616f2dc310bfd5a6e4b058b9bd65bca8e8e6/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/94d6bb60cc779d223c874f2fc63dbb9c23f8cdf8c1504c9a6400090a6763956b/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/32a4a87ec9b18a4c7ebee3f9940a51df5edef2903fa47300d4832a3259958a26/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/b1d9a348a8ef0aacec94ca5b756c279157b27710d898119705d5fc33b9da377b/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/ee772f1396b735d3bc013b62f9d0852b28584129e3bbdb86f13c9e606e71ee1f/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/fe69b399d9e5d4f0846be82e32429b9b86ce0e6a59df88d9618bc8ce3344a154/merged major:0 minor:168 fsType:overlay blockSize:0} 
overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/d08b61e4f980d1d105057d5fc8cdc5fbd129f50d8b2381f62ef5e6c45138ba22/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/92114d496354e7a0f1b9599b5c9612b69eaf691c15cd38c65a717387fe415660/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/837f2208beacbe381d071b5ffaa113f747886b6d64d84479ffccdaaefa5022f8/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/21ca3330603cf7353703c18b2e0b9896585827c73f47ee83db9951a948397cf6/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/cd6b4bdf136ed74023657cf718652b16c659e3ca0ff3203dee588d54febb8e6e/merged major:0 minor:177 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/0019173c877e5c7080fde2fcef184a539b5105a93fd56db4cdfd15e86f8746aa/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/ad6ce98cc472875b1838a985624e8ca554af39da6ae9cda8c39ec83576d844ab/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/441fd4a7abc3373228b43c6d404e3989069290071598ae2efa920f619cc6d7ed/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-198:{mountpoint:/var/lib/containers/storage/overlay/504a20a8cc1350d02dd4fb180d5a4f29a53d26d039177b2aa6186fc6498d4659/merged major:0 minor:198 fsType:overlay blockSize:0} overlay_0-203:{mountpoint:/var/lib/containers/storage/overlay/25072c49dc0a20454397851b4e1fc284b670a9065c6535addb2909fa0343d7e8/merged major:0 minor:203 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/67f96cdb99208cf1d74c8ea90120280ae7e8a371de2f618ee7eac490ed39ad35/merged major:0 minor:208 fsType:overlay blockSize:0} 
overlay_0-213:{mountpoint:/var/lib/containers/storage/overlay/db5e1e6c05588444abaaaf420dc49a4d5da83697eb1ecc0121385a773513ad32/merged major:0 minor:213 fsType:overlay blockSize:0} overlay_0-218:{mountpoint:/var/lib/containers/storage/overlay/58f4fdc277689463210318fc48f92e93ee832b85de994c0e10b702f7303268c1/merged major:0 minor:218 fsType:overlay blockSize:0} overlay_0-219:{mountpoint:/var/lib/containers/storage/overlay/4392bf7be2e6542ab7ae865856f291764611c7a10550e9f8c1e7e11fc3729bfc/merged major:0 minor:219 fsType:overlay blockSize:0} overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/ad87e093dcee11518887953cc5ceb966a9f0677381f734fb832ab23cc8045f6e/merged major:0 minor:228 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/5ee1318c84d0e94fec137a921e81aa0595ee6236631233010cb8c3e463f8af2c/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/148c5cb55218412f773b8cdd2bb29c87c6ebe66eba90ad00ac25344fac91c105/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/87952562372e2a7892b5aa348eb26a6f6a9159aff0fcacc9741b72ddcb34d8e4/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/5f5dbdfc105665922ace53841cf5dd544ef2dc2ff42047d3dba120aa8962bfdc/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/704c1de4b7d88fa1f5d8f4af59f9ed667383fad472b15eebef55812680b59d52/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/415ecab0d00fe172952c193c077298a53c30952ba8c238494c7e1d8f861733c6/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-308:{mountpoint:/var/lib/containers/storage/overlay/bae028af1b38e21a880b73a3d29acac533ea9a612f6b900d2788710cd6a8c11a/merged major:0 minor:308 fsType:overlay blockSize:0} 
overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/7988a1fdb70772cbaec49e60bd582584cb6050e00e3af58858de80ded3cdf803/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/a134cd8c876b5ca62017716e4e84d75eab9ef4ec124dcdc8603a9471fc85bf0e/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/5500e4f9c337cbf9889046ae70595f81db67a36ef9b1031d1410f2913f928766/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/089e0c76d39ba4d9d9d4d5ca153eed13907e5a68dd93980687e47238dddd4ef2/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/ff58ff3270df13baedd0ccaa409c18ad052276ac3841aa99361ad7d80e851091/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/a32b6f3afae2282bd39a19bbfe3f29f0fa7daa6b8af2b42eaf5bd63248de3eff/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/bce687113ff7c771221a83db3194c601355c55519c2980dc8d4249a526d35e71/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/63151899cafd21ac91211c684ff84318421d1e2b4d6dfe46ee7f47a7a44d9d96/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/37a802f20f3da0c283fed8e15841faba813ee943340ec080c859fde68bf7122d/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/7db1534167c3b4cf719b50455f9cbd5ca050b66495d2b715a1f8888ce91422e4/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/e7ca8b712d2f06b00fa762cf85ac8e2d884df5dc03347482f86aaa1c72c32fe0/merged major:0 minor:56 fsType:overlay blockSize:0} 
overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/59ebc9f56a92d2fb27c9c65c642ccf340f1322a9747942ea4db447650e40c812/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/60745b1de9155d97c0167d9d8261ef67f903db796bcbaa5e03ffbf46e269a0a0/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/26121b9a852b95f2e67c1f7cf76ce36bdbe4a0abe1b3c92ff0a76ec47ad993dd/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/be02d296b21e94b9668a7a677c84759341f59cb1fa84a9476fba5687fa506302/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/204a0b39eb8db35b884f6e11e2259e6ca9dc01524d1f0df1c655ff8e2f3a8cd3/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/144b9b3b281d200ba12a0e779deddadcbe099ddb0725090be56c70195755f707/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/0bf728546d22b140e53f66b815d1cfd52f6fe348d699d700fcd0da809873f980/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/1d3e6efb0848de0f1f53c450952972106645f7f7494ebf9bb641d04fcbc0c592/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/a183303019409ec8fa33499b6a5f49fad810e419786bd565cfe91114887d5117/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/c77fe74687132958b95261be3e1a56ca802dc0e02c3d7df1c5970fd53b29a209/merged major:0 minor:95 fsType:overlay blockSize:0}] Feb 16 02:06:36.653171 master-0 kubenswrapper[7721]: I0216 02:06:36.652328 7721 manager.go:217] Machine: {Timestamp:2026-02-16 02:06:36.651445772 +0000 UTC m=+0.145680034 CPUVendorID:AuthenticAMD NumCores:16 
NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1c19e24b661c4676981e885f5d8565ba SystemUUID:1c19e24b-661c-4676-981e-885f5d8565ba BootID:6af96a74-4ecc-4294-8d2f-0e5321b23e8e Filesystems:[{Device:/var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:303 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~projected/kube-api-access-bhz2m DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896/userdata/shm DeviceMajor:0 DeviceMinor:278 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~projected/kube-api-access-t9sgx DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~projected/kube-api-access-k2qvg DeviceMajor:0 DeviceMinor:275 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8dd32bd58a893bd46ee61ae39a01f4492842e7fb2c4d56eeca513230f073e979/userdata/shm DeviceMajor:0 DeviceMinor:323 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3aa52e31a2b9a476aba0a48b18d458a5a18722a85353d604bbc35df3b9829545/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-219 DeviceMajor:0 DeviceMinor:219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~projected/kube-api-access-6f8fj DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/kube-api-access-s9p9r DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/359c73798649b5d5b089f1492d973d2f87ffd23f53f2f5868ba22d8d7543d4cc/userdata/shm DeviceMajor:0 DeviceMinor:120 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-218 DeviceMajor:0 DeviceMinor:218 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-314 
DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:238 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/248a67424bdbb1372ecb2fb070f15261787aceee6d09513f5274ef915ebe68ae/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272/userdata/shm DeviceMajor:0 DeviceMinor:321 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:136 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-122 DeviceMajor:0 DeviceMinor:122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-198 DeviceMajor:0 DeviceMinor:198 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~projected/kube-api-access-7llx6 DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2a67f799-fd8d-4bee-9d67-720151c1650b/volumes/kubernetes.io~projected/kube-api-access-47lht DeviceMajor:0 DeviceMinor:267 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~projected/kube-api-access-pjsbs DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~projected/kube-api-access-f8lvq DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~projected/kube-api-access-24b6h DeviceMajor:0 DeviceMinor:276 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aafd16466f6eed6a672c6fa59488bcd1ea8cbc42fa0dfe86540d9e97cd364cb6/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-306 DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72/userdata/shm DeviceMajor:0 DeviceMinor:166 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75/volumes/kubernetes.io~projected/kube-api-access-4ns9l DeviceMajor:0 DeviceMinor:128 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788/userdata/shm DeviceMajor:0 DeviceMinor:144 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:237 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~projected/kube-api-access-t7fmj DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~projected/kube-api-access-8d49c DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2287a210e87155c02ab6e622acb47d96fd89d65dc49d2afbe29745b869fd7b87/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~projected/kube-api-access-5m4sb DeviceMajor:0 DeviceMinor:133 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-146 
DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/kube-api-access-8rc6w DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:270 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~projected/kube-api-access-58cq8 DeviceMajor:0 DeviceMinor:137 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~projected/kube-api-access-jlnkb DeviceMajor:0 DeviceMinor:164 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-213 DeviceMajor:0 DeviceMinor:213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/19f7fe92f509ebf58263703a24e99425e1eb0493ad65313aae50f23d57b15adc/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~projected/kube-api-access-nl7r8 DeviceMajor:0 DeviceMinor:70 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b6088119-1125-4271-8c0b-0675e700edd9/volumes/kubernetes.io~projected/kube-api-access-jgpcj DeviceMajor:0 DeviceMinor:266 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~projected/kube-api-access-bff42 DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/450f24c15fabf2ce3093e6381873f7497e388b2d4b0a5acae355eb63b714bf74/userdata/shm DeviceMajor:0 DeviceMinor:77 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:233 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~projected/kube-api-access-xr7gn DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb/userdata/shm DeviceMajor:0 DeviceMinor:140 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-203 DeviceMajor:0 DeviceMinor:203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/430c146b-ceaf-411a-add6-ce949243aabf/volumes/kubernetes.io~projected/kube-api-access-vdllq DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} 
{Device:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:108 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~projected/kube-api-access-zr872 DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~projected/kube-api-access-lvf8t DeviceMajor:0 DeviceMinor:273 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/93fe8320cd8b094f12e9b856631a3581df910023e217ba523e4fc8bbdc13eff6/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-142 DeviceMajor:0 DeviceMinor:142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-228 DeviceMajor:0 DeviceMinor:228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~projected/kube-api-access-n4gmn DeviceMajor:0 DeviceMinor:271 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d008dbd4-e713-4f2e-b64d-ca9cfc83a502/volumes/kubernetes.io~projected/kube-api-access-2582m DeviceMajor:0 DeviceMinor:320 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:165 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~projected/kube-api-access-gcq6v DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-308 
DeviceMajor:0 DeviceMinor:308 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:19f7fe92f509ebf MacAddress:7a:63:6a:b6:5b:46 Speed:10000 Mtu:8900} {Name:2287a210e87155c MacAddress:96:97:5e:e0:30:a7 Speed:10000 Mtu:8900} {Name:248a67424bdbb13 MacAddress:22:24:86:9d:14:ab Speed:10000 Mtu:8900} {Name:3aa52e31a2b9a47 MacAddress:fe:87:a4:68:f4:40 Speed:10000 Mtu:8900} {Name:48c24060310ea59 MacAddress:fe:6c:18:48:49:e2 Speed:10000 Mtu:8900} {Name:4b7e8c0ad2cdf87 MacAddress:72:14:f5:73:3d:2f Speed:10000 Mtu:8900} {Name:8dd32bd58a893bd MacAddress:f2:32:a3:0c:dc:de Speed:10000 Mtu:8900} {Name:93fe8320cd8b094 MacAddress:be:88:41:75:9a:09 Speed:10000 Mtu:8900} {Name:99cec3957b7591d MacAddress:76:92:3b:53:dc:fe Speed:10000 Mtu:8900} {Name:aafd16466f6eed6 MacAddress:2e:e7:f0:f5:28:92 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:46:04:3f:41:5e:67 Speed:0 Mtu:8900} {Name:d3a6bee8bdf6774 MacAddress:12:3a:a6:af:75:30 Speed:10000 Mtu:8900} {Name:ef2bb9465307e33 MacAddress:0e:fb:54:bc:93:ec Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:61:d7:58 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:30:df:9f Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:6e:22:cb:a5:72:48 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] 
SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 02:06:36.653171 master-0 kubenswrapper[7721]: I0216 02:06:36.653165 7721 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 16 02:06:36.653417 master-0 kubenswrapper[7721]: I0216 02:06:36.653258 7721 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 02:06:36.653506 master-0 kubenswrapper[7721]: I0216 02:06:36.653487 7721 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 02:06:36.653646 master-0 kubenswrapper[7721]: I0216 02:06:36.653614 7721 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 02:06:36.653819 master-0 kubenswrapper[7721]: I0216 02:06:36.653644 7721 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 02:06:36.653861 master-0 kubenswrapper[7721]: I0216 02:06:36.653839 7721 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 02:06:36.653861 master-0 kubenswrapper[7721]: I0216 02:06:36.653850 7721 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 02:06:36.653861 master-0 kubenswrapper[7721]: I0216 02:06:36.653858 7721 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 02:06:36.653934 master-0 kubenswrapper[7721]: I0216 02:06:36.653876 7721 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 02:06:36.653963 master-0 kubenswrapper[7721]: I0216 02:06:36.653950 7721 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 02:06:36.654032 master-0 kubenswrapper[7721]: I0216 02:06:36.654022 7721 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 02:06:36.654100 master-0 kubenswrapper[7721]: I0216 02:06:36.654091 7721 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 02:06:36.654125 master-0 kubenswrapper[7721]: I0216 02:06:36.654105 7721 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 02:06:36.654125 master-0 kubenswrapper[7721]: I0216 02:06:36.654121 7721 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 02:06:36.654170 master-0 kubenswrapper[7721]: I0216 02:06:36.654132 7721 kubelet.go:324] "Adding apiserver pod source"
Feb 16 02:06:36.654170 master-0 kubenswrapper[7721]: I0216 02:06:36.654149 7721 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 02:06:36.656524 master-0 kubenswrapper[7721]: I0216 02:06:36.656477 7721 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1"
Feb 16 02:06:36.656857 master-0 kubenswrapper[7721]: I0216 02:06:36.656825 7721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 02:06:36.657245 master-0 kubenswrapper[7721]: I0216 02:06:36.657217 7721 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 02:06:36.657388 master-0 kubenswrapper[7721]: I0216 02:06:36.657360 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 02:06:36.657388 master-0 kubenswrapper[7721]: I0216 02:06:36.657386 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 02:06:36.657473 master-0 kubenswrapper[7721]: I0216 02:06:36.657396 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 02:06:36.657473 master-0 kubenswrapper[7721]: I0216 02:06:36.657404 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 02:06:36.657473 master-0 kubenswrapper[7721]: I0216 02:06:36.657412 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 02:06:36.657473 master-0 kubenswrapper[7721]: I0216 02:06:36.657420 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 02:06:36.657473 master-0 kubenswrapper[7721]: I0216 02:06:36.657467 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 02:06:36.657473 master-0 kubenswrapper[7721]: I0216 02:06:36.657474 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 02:06:36.657616 master-0 kubenswrapper[7721]: I0216 02:06:36.657487 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 02:06:36.657616 master-0 kubenswrapper[7721]: I0216 02:06:36.657496 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 02:06:36.657616 master-0 kubenswrapper[7721]: I0216 02:06:36.657509 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 02:06:36.657616 master-0 kubenswrapper[7721]: I0216 02:06:36.657525 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 02:06:36.657616 master-0 kubenswrapper[7721]: I0216 02:06:36.657558 7721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 02:06:36.658179 master-0 kubenswrapper[7721]: I0216 02:06:36.658148 7721 server.go:1280] "Started kubelet"
Feb 16 02:06:36.660310 master-0 kubenswrapper[7721]: I0216 02:06:36.659972 7721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 02:06:36.660380 master-0 kubenswrapper[7721]: I0216 02:06:36.659950 7721 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 02:06:36.660472 master-0 kubenswrapper[7721]: I0216 02:06:36.660357 7721 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 16 02:06:36.661177 master-0 kubenswrapper[7721]: I0216 02:06:36.661146 7721 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 02:06:36.662595 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 16 02:06:36.670675 master-0 kubenswrapper[7721]: I0216 02:06:36.669951 7721 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 02:06:36.670729 master-0 kubenswrapper[7721]: I0216 02:06:36.670664 7721 server.go:449] "Adding debug handlers to kubelet server"
Feb 16 02:06:36.671033 master-0 kubenswrapper[7721]: I0216 02:06:36.670995 7721 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 02:06:36.673695 master-0 kubenswrapper[7721]: I0216 02:06:36.673650 7721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 02:06:36.673917 master-0 kubenswrapper[7721]: I0216 02:06:36.673895 7721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 02:06:36.674234 master-0 kubenswrapper[7721]: I0216 02:06:36.674191 7721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 01:56:50 +0000 UTC, rotation deadline is 2026-02-16 20:08:23.972013332 +0000 UTC
Feb 16 02:06:36.674273 master-0 kubenswrapper[7721]: I0216 02:06:36.674251 7721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h1m47.297764741s for next certificate rotation
Feb 16 02:06:36.674504 master-0 kubenswrapper[7721]: I0216 02:06:36.674487 7721 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 02:06:36.674504 master-0 kubenswrapper[7721]: I0216 02:06:36.674500 7721 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 02:06:36.675726 master-0 kubenswrapper[7721]: I0216 02:06:36.675711 7721 factory.go:55] Registering systemd factory
Feb 16 02:06:36.675801 master-0 kubenswrapper[7721]: I0216 02:06:36.675792 7721 factory.go:221] Registration of the systemd container factory successfully
Feb 16 02:06:36.676099 master-0 kubenswrapper[7721]: I0216 02:06:36.676078 7721 factory.go:153] Registering CRI-O factory
Feb 16 02:06:36.676099 master-0 kubenswrapper[7721]: I0216 02:06:36.676092 7721 factory.go:221] Registration of the crio container factory successfully
Feb 16 02:06:36.676175 master-0 kubenswrapper[7721]: I0216 02:06:36.676153 7721 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 02:06:36.676202 master-0 kubenswrapper[7721]: I0216 02:06:36.676176 7721 factory.go:103] Registering Raw factory
Feb 16 02:06:36.676202 master-0 kubenswrapper[7721]: I0216 02:06:36.676191 7721 manager.go:1196] Started watching for new ooms in manager
Feb 16 02:06:36.676596 master-0 kubenswrapper[7721]: I0216 02:06:36.676585 7721 manager.go:319] Starting recovery of all containers
Feb 16 02:06:36.676648 master-0 kubenswrapper[7721]: I0216 02:06:36.676613 7721 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 02:06:36.677544 master-0 kubenswrapper[7721]: I0216 02:06:36.677509 7721 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 16 02:06:36.678127 master-0 kubenswrapper[7721]: I0216 02:06:36.678065 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib" seLinuxMountContext=""
Feb 16 02:06:36.678127 master-0 kubenswrapper[7721]: I0216 02:06:36.678126 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 02:06:36.678190 master-0 kubenswrapper[7721]: I0216 02:06:36.678147 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca" seLinuxMountContext=""
Feb 16 02:06:36.678190 master-0 kubenswrapper[7721]: I0216 02:06:36.678159 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" volumeName="kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca" seLinuxMountContext=""
Feb 16 02:06:36.678190 master-0 kubenswrapper[7721]: I0216 02:06:36.678169 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" volumeName="kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access" seLinuxMountContext=""
Feb 16 02:06:36.678190 master-0 kubenswrapper[7721]: I0216 02:06:36.678185 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" volumeName="kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678196 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678211 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04804a08-e3a5-46f3-abcb-967866834baa" volumeName="kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678225 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678237 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678252 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0540a70-a256-422b-a827-e564d0e67866" volumeName="kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678265 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8f33151-61df-4b66-ba85-9ba210779059" volumeName="kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678281 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2a83ddd-ffa5-4127-9099-91187ad9dbba" volumeName="kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678296 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6088119-1125-4271-8c0b-0675e700edd9" volumeName="kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678311 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e379cfaf-3a4c-40e7-8641-3524b3669295" volumeName="kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678323 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a67f799-fd8d-4bee-9d67-720151c1650b" volumeName="kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678337 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a5b01c1-1231-4e69-8b6c-c4981b65b26e" volumeName="kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678350 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678361 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678375 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678386 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c02961f-30ec-4405-b7fa-9c4192342ae9" volumeName="kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678401 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c02961f-30ec-4405-b7fa-9c4192342ae9" volumeName="kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678412 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678425 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0540a70-a256-422b-a827-e564d0e67866" volumeName="kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678451 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2a83ddd-ffa5-4127-9099-91187ad9dbba" volumeName="kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678463 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678483 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7317f91-9441-449f-9738-85da088cf94f" volumeName="kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678497 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1743372f-bdb0-4558-b47b-3714f3aa3fde" volumeName="kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678514 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="456e6c3a-c16c-470b-a0cd-bb79865b54f0" volumeName="kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678530 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678546 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" volumeName="kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678557 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" volumeName="kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.678792 master-0 kubenswrapper[7721]: I0216 02:06:36.678574 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.681083 master-0 kubenswrapper[7721]: I0216 02:06:36.678586 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e379cfaf-3a4c-40e7-8641-3524b3669295" volumeName="kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v" seLinuxMountContext=""
Feb 16 02:06:36.681152 master-0 kubenswrapper[7721]: I0216 02:06:36.681109 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="456e6c3a-c16c-470b-a0cd-bb79865b54f0" volumeName="kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8" seLinuxMountContext=""
Feb 16 02:06:36.681194 master-0 kubenswrapper[7721]: I0216 02:06:36.681151 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76915cba-7c11-4bd8-9943-81de74e7781b" volumeName="kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert" seLinuxMountContext=""
Feb 16 02:06:36.681225 master-0 kubenswrapper[7721]: I0216 02:06:36.681194 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8f33151-61df-4b66-ba85-9ba210779059" volumeName="kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.681225 master-0 kubenswrapper[7721]: I0216 02:06:36.681215 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1743372f-bdb0-4558-b47b-3714f3aa3fde" volumeName="kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.681280 master-0 kubenswrapper[7721]: I0216 02:06:36.681237 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21686a6d-f685-4fb6-98af-3e8a39c5981b" volumeName="kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t" seLinuxMountContext=""
Feb 16 02:06:36.681280 master-0 kubenswrapper[7721]: I0216 02:06:36.681253 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430c146b-ceaf-411a-add6-ce949243aabf" volumeName="kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config" seLinuxMountContext=""
Feb 16 02:06:36.681280 master-0 kubenswrapper[7721]: I0216 02:06:36.681266 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="467d92a2-1cf3-418d-b41e-8e5f9d7a5b74" volumeName="kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681285 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d008dbd4-e713-4f2e-b64d-ca9cfc83a502" volumeName="kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681299 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f2d2601-481d-4e86-ac4c-3d34d5691261" volumeName="kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681312 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430c146b-ceaf-411a-add6-ce949243aabf" volumeName="kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681333 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" volumeName="kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681346 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a5b01c1-1231-4e69-8b6c-c4981b65b26e" volumeName="kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681365 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681378 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0540a70-a256-422b-a827-e564d0e67866" volumeName="kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681395 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bde83629-b39c-401e-bc30-5ce205638918" volumeName="kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca" seLinuxMountContext=""
Feb 16 02:06:36.681418 master-0 kubenswrapper[7721]: I0216 02:06:36.681416 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681477 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" volumeName="kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681501 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" volumeName="kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681532 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9be9fd24-fdb1-43dc-80b8-68020427bfd7" volumeName="kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681555 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9be9fd24-fdb1-43dc-80b8-68020427bfd7" volumeName="kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681575 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f2d2601-481d-4e86-ac4c-3d34d5691261" volumeName="kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681596 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21686a6d-f685-4fb6-98af-3e8a39c5981b" volumeName="kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681618 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91938be6-9ae4-4849-abe8-fc842daecd23" volumeName="kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681633 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" volumeName="kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681655 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e379cfaf-3a4c-40e7-8641-3524b3669295" volumeName="kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config" seLinuxMountContext=""
Feb 16 02:06:36.681669 master-0 kubenswrapper[7721]: I0216 02:06:36.681674 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7317f91-9441-449f-9738-85da088cf94f" volumeName="kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config" seLinuxMountContext=""
Feb 16 02:06:36.681917 master-0 kubenswrapper[7721]: I0216 02:06:36.681695 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" volumeName="kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy" seLinuxMountContext=""
Feb 16 02:06:36.681917 master-0 kubenswrapper[7721]: I0216 02:06:36.681712 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04804a08-e3a5-46f3-abcb-967866834baa" volumeName="kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca" seLinuxMountContext=""
Feb 16 02:06:36.682091 master-0 kubenswrapper[7721]: I0216 02:06:36.681729 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ffa4db8-97da-42de-8e51-35680f518ca7" volumeName="kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx" seLinuxMountContext=""
Feb 16 02:06:36.682136 master-0 kubenswrapper[7721]: I0216 02:06:36.682117 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430c146b-ceaf-411a-add6-ce949243aabf" volumeName="kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq" seLinuxMountContext=""
Feb 16 02:06:36.682166 master-0 kubenswrapper[7721]: I0216 02:06:36.682144 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="467d92a2-1cf3-418d-b41e-8e5f9d7a5b74" volumeName="kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert" seLinuxMountContext=""
Feb 16 02:06:36.682196 master-0 kubenswrapper[7721]: I0216 02:06:36.682173 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76915cba-7c11-4bd8-9943-81de74e7781b" volumeName="kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj" seLinuxMountContext=""
Feb 16 02:06:36.682868 master-0 kubenswrapper[7721]: I0216 02:06:36.682268 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f0f9b7d-e663-4927-861b-a9544d483b6e" volumeName="kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb" seLinuxMountContext=""
Feb 16 02:06:36.682928 master-0 kubenswrapper[7721]: I0216 02:06:36.682895 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8f33151-61df-4b66-ba85-9ba210779059" volumeName="kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access" seLinuxMountContext=""
Feb 16 02:06:36.682958 master-0 kubenswrapper[7721]: I0216 02:06:36.682934 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1743372f-bdb0-4558-b47b-3714f3aa3fde" volumeName="kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access" seLinuxMountContext=""
Feb 16 02:06:36.683880 master-0 kubenswrapper[7721]: I0216 02:06:36.683820 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c02961f-30ec-4405-b7fa-9c4192342ae9" volumeName="kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config" seLinuxMountContext=""
Feb 16 02:06:36.683912 master-0 kubenswrapper[7721]: I0216 02:06:36.683890 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23755f7f-dce6-4dcf-9664-22e3aedb5c81" volumeName="kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn" seLinuxMountContext=""
Feb 16 02:06:36.683938 master-0 kubenswrapper[7721]: I0216 02:06:36.683923 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" volumeName="kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access" seLinuxMountContext=""
Feb 16 02:06:36.683974 master-0 kubenswrapper[7721]: I0216 02:06:36.683952 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9be9fd24-fdb1-43dc-80b8-68020427bfd7" volumeName="kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates" seLinuxMountContext=""
Feb 16 02:06:36.684002 master-0 kubenswrapper[7721]: I0216 02:06:36.683983 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bde83629-b39c-401e-bc30-5ce205638918" volumeName="kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h" seLinuxMountContext=""
Feb 16 02:06:36.684032 master-0 kubenswrapper[7721]: I0216 02:06:36.684004 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a67f799-fd8d-4bee-9d67-720151c1650b" volumeName="kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht" seLinuxMountContext=""
Feb 16 02:06:36.684032 master-0 kubenswrapper[7721]: I0216 02:06:36.684024 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91938be6-9ae4-4849-abe8-fc842daecd23" volumeName="kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.684085 master-0 kubenswrapper[7721]: I0216 02:06:36.684044 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" volumeName="kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert" seLinuxMountContext=""
Feb 16 02:06:36.684085 master-0 kubenswrapper[7721]: I0216 02:06:36.684063 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7317f91-9441-449f-9738-85da088cf94f" volumeName="kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8" seLinuxMountContext=""
Feb 16 02:06:36.684615 master-0 kubenswrapper[7721]: I0216 02:06:36.684575 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04804a08-e3a5-46f3-abcb-967866834baa" volumeName="kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w" seLinuxMountContext=""
Feb 16 02:06:36.684664 master-0 kubenswrapper[7721]: I0216 02:06:36.684617 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f2d2601-481d-4e86-ac4c-3d34d5691261" volumeName="kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 16 02:06:36.684664 master-0 kubenswrapper[7721]: I0216 02:06:36.684637 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a5b01c1-1231-4e69-8b6c-c4981b65b26e" volumeName="kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config" seLinuxMountContext=""
Feb 16 02:06:36.684664 master-0 kubenswrapper[7721]: I0216 02:06:36.684657 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91938be6-9ae4-4849-abe8-fc842daecd23" volumeName="kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m" seLinuxMountContext=""
Feb 16 02:06:36.684742 master-0 kubenswrapper[7721]: I0216 02:06:36.684678 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config" seLinuxMountContext=""
Feb 16 02:06:36.684742 master-0 kubenswrapper[7721]: I0216 02:06:36.684696 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7317f91-9441-449f-9738-85da088cf94f" volumeName="kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 16 02:06:36.684742 master-0 kubenswrapper[7721]: I0216 02:06:36.684717 7721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" volumeName="kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l" seLinuxMountContext=""
Feb 16 02:06:36.684742 master-0 kubenswrapper[7721]: I0216 02:06:36.684735 7721 reconstruct.go:97] "Volume reconstruction finished"
Feb 16 02:06:36.684841 master-0 kubenswrapper[7721]: I0216 02:06:36.684747 7721 reconciler.go:26] "Reconciler: start to sync state"
Feb 16 02:06:36.688454 master-0 kubenswrapper[7721]: I0216 02:06:36.688374 7721 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 16 02:06:36.720791 master-0 kubenswrapper[7721]: I0216 02:06:36.720683 7721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 16 02:06:36.723515 master-0 kubenswrapper[7721]: I0216 02:06:36.723471 7721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 16 02:06:36.723568 master-0 kubenswrapper[7721]: I0216 02:06:36.723543 7721 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 16 02:06:36.723606 master-0 kubenswrapper[7721]: I0216 02:06:36.723578 7721 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 16 02:06:36.723695 master-0 kubenswrapper[7721]: E0216 02:06:36.723656 7721 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 16 02:06:36.725375 master-0 kubenswrapper[7721]: I0216 02:06:36.725326 7721 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 02:06:36.744427 master-0 kubenswrapper[7721]: I0216 02:06:36.744375 7721 generic.go:334] "Generic (PLEG): container finished" podID="6dcef814-353e-4985-9afc-9e545f7853ae" containerID="64b93c97323f7e51986ec036f1f46d7cb6a600efeaf1c716bc52e696eb3b4391" exitCode=0
Feb 16 02:06:36.750413 master-0 kubenswrapper[7721]: I0216 02:06:36.747919 7721 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b" exitCode=0
Feb 16 02:06:36.759578 master-0 kubenswrapper[7721]: I0216 02:06:36.759534 7721 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="0858acab9d05c2a71790635b1c93f375645e501dd52d5da79fb7b1cbf9b57e86" exitCode=0
Feb 16 02:06:36.759578 master-0 kubenswrapper[7721]: I0216 02:06:36.759572 7721 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="e44874f0350470f24d5ea4a5701795fe3efa7441e0282bd848060a5c5089ab29"
exitCode=0 Feb 16 02:06:36.759699 master-0 kubenswrapper[7721]: I0216 02:06:36.759589 7721 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="f32a8a71ff757721727f0a15b091975f54ceee8df971155b55280b5af1e45ccf" exitCode=0 Feb 16 02:06:36.759699 master-0 kubenswrapper[7721]: I0216 02:06:36.759607 7721 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="d0f78aff7e0b714e84872137c91a78811349c06129b280efb18e955c4097bbb8" exitCode=0 Feb 16 02:06:36.759699 master-0 kubenswrapper[7721]: I0216 02:06:36.759624 7721 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="93e94006a31f3ba668f3844369615d0dcc4ff0267ec4f323096fa745c6b0818c" exitCode=0 Feb 16 02:06:36.759699 master-0 kubenswrapper[7721]: I0216 02:06:36.759649 7721 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="25d520296eb3e3e0c239fcaebd996a70fe80cf8a6487bc284e94db513bb2809d" exitCode=0 Feb 16 02:06:36.761409 master-0 kubenswrapper[7721]: I0216 02:06:36.761346 7721 generic.go:334] "Generic (PLEG): container finished" podID="a874e346-456c-4e93-87bd-7b70434ddeb1" containerID="55eb3affcbf11e7c854417599461f5fea225338fb48ac5a0d81226de9a467092" exitCode=0 Feb 16 02:06:36.790802 master-0 kubenswrapper[7721]: I0216 02:06:36.790756 7721 generic.go:334] "Generic (PLEG): container finished" podID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerID="e0727143cdcad3cc5251095472bb96e72f7ab1b59c0a90ac12887c7c83657168" exitCode=0 Feb 16 02:06:36.797771 master-0 kubenswrapper[7721]: I0216 02:06:36.797730 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 02:06:36.798243 master-0 kubenswrapper[7721]: I0216 02:06:36.798195 7721 generic.go:334] "Generic (PLEG): 
container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b" exitCode=1 Feb 16 02:06:36.798243 master-0 kubenswrapper[7721]: I0216 02:06:36.798239 7721 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="0a5e2f09da7456e3ddaab1d9e62abba19553e3d0c15bed35db33dc97212f0fb6" exitCode=0 Feb 16 02:06:36.823918 master-0 kubenswrapper[7721]: E0216 02:06:36.823875 7721 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 02:06:36.845889 master-0 kubenswrapper[7721]: I0216 02:06:36.845839 7721 manager.go:324] Recovery completed Feb 16 02:06:36.885387 master-0 kubenswrapper[7721]: I0216 02:06:36.885318 7721 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 02:06:36.885387 master-0 kubenswrapper[7721]: I0216 02:06:36.885343 7721 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 02:06:36.885387 master-0 kubenswrapper[7721]: I0216 02:06:36.885373 7721 state_mem.go:36] "Initialized new in-memory state store" Feb 16 02:06:36.885612 master-0 kubenswrapper[7721]: I0216 02:06:36.885586 7721 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 16 02:06:36.885652 master-0 kubenswrapper[7721]: I0216 02:06:36.885603 7721 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 16 02:06:36.885652 master-0 kubenswrapper[7721]: I0216 02:06:36.885627 7721 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 16 02:06:36.885652 master-0 kubenswrapper[7721]: I0216 02:06:36.885634 7721 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 16 02:06:36.885652 master-0 kubenswrapper[7721]: I0216 02:06:36.885641 7721 policy_none.go:49] "None policy: Start" Feb 16 02:06:36.888087 master-0 kubenswrapper[7721]: I0216 02:06:36.888058 7721 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 
02:06:36.888134 master-0 kubenswrapper[7721]: I0216 02:06:36.888094 7721 state_mem.go:35] "Initializing new in-memory state store" Feb 16 02:06:36.888337 master-0 kubenswrapper[7721]: I0216 02:06:36.888321 7721 state_mem.go:75] "Updated machine memory state" Feb 16 02:06:36.888337 master-0 kubenswrapper[7721]: I0216 02:06:36.888334 7721 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 16 02:06:36.898962 master-0 kubenswrapper[7721]: I0216 02:06:36.898925 7721 manager.go:334] "Starting Device Plugin manager" Feb 16 02:06:36.899191 master-0 kubenswrapper[7721]: I0216 02:06:36.899163 7721 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 02:06:36.899191 master-0 kubenswrapper[7721]: I0216 02:06:36.899192 7721 server.go:79] "Starting device plugin registration server" Feb 16 02:06:36.899714 master-0 kubenswrapper[7721]: I0216 02:06:36.899692 7721 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 02:06:36.899796 master-0 kubenswrapper[7721]: I0216 02:06:36.899713 7721 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 02:06:36.900298 master-0 kubenswrapper[7721]: I0216 02:06:36.900271 7721 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 02:06:36.900375 master-0 kubenswrapper[7721]: I0216 02:06:36.900356 7721 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 02:06:36.900375 master-0 kubenswrapper[7721]: I0216 02:06:36.900369 7721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 02:06:37.000250 master-0 kubenswrapper[7721]: I0216 02:06:37.000194 7721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:06:37.002349 master-0 kubenswrapper[7721]: I0216 02:06:37.002317 7721 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:06:37.002349 master-0 kubenswrapper[7721]: I0216 02:06:37.002351 7721 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:06:37.002637 master-0 kubenswrapper[7721]: I0216 02:06:37.002363 7721 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:06:37.002637 master-0 kubenswrapper[7721]: I0216 02:06:37.002420 7721 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:06:37.012859 master-0 kubenswrapper[7721]: I0216 02:06:37.012801 7721 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 02:06:37.013127 master-0 kubenswrapper[7721]: I0216 02:06:37.013092 7721 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 02:06:37.024354 master-0 kubenswrapper[7721]: I0216 02:06:37.024274 7721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"] Feb 16 02:06:37.025256 master-0 kubenswrapper[7721]: I0216 02:06:37.025185 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369"} Feb 16 02:06:37.025340 master-0 kubenswrapper[7721]: I0216 02:06:37.025254 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961"} Feb 16 
02:06:37.025340 master-0 kubenswrapper[7721]: I0216 02:06:37.025273 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerDied","Data":"3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b"} Feb 16 02:06:37.025340 master-0 kubenswrapper[7721]: I0216 02:06:37.025286 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37"} Feb 16 02:06:37.025340 master-0 kubenswrapper[7721]: I0216 02:06:37.025300 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e947e806d4cd304de7ae0e1eadc87bafdcb0e816f937b266a440ab82d15412f1" Feb 16 02:06:37.025340 master-0 kubenswrapper[7721]: I0216 02:06:37.025317 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"7921033cca2163ce5e4549f18d23b23e3797f9935bb1bd7ed5580d96e9031f08"} Feb 16 02:06:37.025340 master-0 kubenswrapper[7721]: I0216 02:06:37.025330 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"e320e64d785f6de34aed9795724368979c84944d7c7a25afb100430d56e9ef3e"} Feb 16 02:06:37.025340 master-0 kubenswrapper[7721]: I0216 02:06:37.025342 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025380 
7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821f1d48faf9d45a2dade69f93e659089f19489c18c9dee5c3fa79d5976d4f78" Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025427 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025460 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025474 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025488 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"0d84c00dcc11900a2f5a4ff15f798ef8c8b6cc92a9b7e1f32a7c33bfeed4a478"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025500 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025531 7721 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="053a88a441f8640d0f49f51a0dfba08da37a8b783fce751c0810295de49cf426" Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025545 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"93fa1f75b959883173b882fc0c221239f90d8c0f0c6f464304aa368bf78625b2"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025556 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025571 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"0a5e2f09da7456e3ddaab1d9e62abba19553e3d0c15bed35db33dc97212f0fb6"} Feb 16 02:06:37.025820 master-0 kubenswrapper[7721]: I0216 02:06:37.025586 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022"} Feb 16 02:06:37.041100 master-0 kubenswrapper[7721]: E0216 02:06:37.041033 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:06:37.041306 master-0 kubenswrapper[7721]: E0216 02:06:37.041257 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 
02:06:37.041396 master-0 kubenswrapper[7721]: E0216 02:06:37.041293 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.046129 master-0 kubenswrapper[7721]: E0216 02:06:37.046012 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:06:37.047024 master-0 kubenswrapper[7721]: W0216 02:06:37.046988 7721 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 16 02:06:37.047150 master-0 kubenswrapper[7721]: E0216 02:06:37.047051 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:06:37.090503 master-0 kubenswrapper[7721]: I0216 02:06:37.090474 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " 
pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:06:37.090586 master-0 kubenswrapper[7721]: I0216 02:06:37.090516 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:06:37.090586 master-0 kubenswrapper[7721]: I0216 02:06:37.090550 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:06:37.090586 master-0 kubenswrapper[7721]: I0216 02:06:37.090575 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.090802 master-0 kubenswrapper[7721]: I0216 02:06:37.090612 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.090802 master-0 kubenswrapper[7721]: I0216 02:06:37.090636 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.090802 master-0 kubenswrapper[7721]: I0216 02:06:37.090660 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.090802 master-0 kubenswrapper[7721]: I0216 02:06:37.090686 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.090802 master-0 kubenswrapper[7721]: I0216 02:06:37.090707 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:06:37.090802 master-0 kubenswrapper[7721]: I0216 02:06:37.090727 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.090802 master-0 kubenswrapper[7721]: I0216 02:06:37.090748 7721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:06:37.091040 master-0 kubenswrapper[7721]: I0216 02:06:37.090812 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.091040 master-0 kubenswrapper[7721]: I0216 02:06:37.090863 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.091040 master-0 kubenswrapper[7721]: I0216 02:06:37.090901 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.091040 master-0 kubenswrapper[7721]: I0216 02:06:37.090936 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.091040 master-0 kubenswrapper[7721]: I0216 02:06:37.090971 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:06:37.091040 master-0 kubenswrapper[7721]: I0216 02:06:37.091001 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.191625 master-0 kubenswrapper[7721]: I0216 02:06:37.191561 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.191782 master-0 kubenswrapper[7721]: I0216 02:06:37.191630 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.191782 master-0 kubenswrapper[7721]: I0216 02:06:37.191669 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.191782 master-0 kubenswrapper[7721]: I0216 02:06:37.191683 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.191782 master-0 kubenswrapper[7721]: I0216 02:06:37.191705 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.191881 master-0 kubenswrapper[7721]: I0216 02:06:37.191777 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.191881 master-0 kubenswrapper[7721]: I0216 02:06:37.191822 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:06:37.191881 master-0 kubenswrapper[7721]: I0216 02:06:37.191855 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:06:37.191964 master-0 kubenswrapper[7721]: I0216 02:06:37.191869 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.191964 master-0 kubenswrapper[7721]: I0216 02:06:37.191888 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:06:37.191964 master-0 kubenswrapper[7721]: I0216 02:06:37.191928 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:06:37.192056 master-0 kubenswrapper[7721]: I0216 02:06:37.191933 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.192103 master-0 kubenswrapper[7721]: I0216 02:06:37.192074 7721 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.192137 master-0 kubenswrapper[7721]: I0216 02:06:37.192110 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:06:37.192137 master-0 kubenswrapper[7721]: I0216 02:06:37.192133 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.192199 master-0 kubenswrapper[7721]: I0216 02:06:37.192155 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.192199 master-0 kubenswrapper[7721]: I0216 02:06:37.192181 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:37.192255 master-0 kubenswrapper[7721]: I0216 02:06:37.192202 7721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:06:37.192255 master-0 kubenswrapper[7721]: I0216 02:06:37.192228 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:06:37.192255 master-0 kubenswrapper[7721]: I0216 02:06:37.192252 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.192330 master-0 kubenswrapper[7721]: I0216 02:06:37.192273 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.192330 master-0 kubenswrapper[7721]: I0216 02:06:37.192308 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.192330 master-0 kubenswrapper[7721]: I0216 02:06:37.191952 7721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 02:06:37.192463 master-0 kubenswrapper[7721]: I0216 02:06:37.192352 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 02:06:37.192463 master-0 kubenswrapper[7721]: I0216 02:06:37.191962 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:37.192463 master-0 kubenswrapper[7721]: I0216 02:06:37.192396 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 02:06:37.192463 master-0 kubenswrapper[7721]: I0216 02:06:37.191979 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 02:06:37.192627 master-0 kubenswrapper[7721]: I0216 02:06:37.192462 7721 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 02:06:37.192627 master-0 kubenswrapper[7721]: I0216 02:06:37.191997 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 02:06:37.192627 master-0 kubenswrapper[7721]: I0216 02:06:37.192526 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:37.192627 master-0 kubenswrapper[7721]: I0216 02:06:37.192557 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:37.192627 master-0 kubenswrapper[7721]: I0216 02:06:37.192588 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 02:06:37.192627 master-0 kubenswrapper[7721]: I0216 02:06:37.192620 7721 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 02:06:37.192776 master-0 kubenswrapper[7721]: I0216 02:06:37.192672 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 02:06:37.655598 master-0 kubenswrapper[7721]: I0216 02:06:37.655420 7721 apiserver.go:52] "Watching apiserver"
Feb 16 02:06:37.669022 master-0 kubenswrapper[7721]: I0216 02:06:37.668940 7721 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 02:06:37.670339 master-0 kubenswrapper[7721]: I0216 02:06:37.670258 7721 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs","openshift-multus/multus-8jgrl","openshift-multus/multus-admission-controller-7c64d55f8-62wr2","openshift-multus/network-metrics-daemon-gn9mv","kube-system/bootstrap-kube-controller-manager-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4","openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s","openshift-network-node-identity/network-node-identity-kffmg","openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd","openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6","openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-diagnostics/network-check-target-hswdj","openshift-network-operator/iptables-alerter-9bnql","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4","openshift-network-operator/network-operator-6fcf4c966-dctqr","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9","openshift-ovn-kubernetes/ovnkube-node-bs85n","openshift-authentication-operator/authentication-operator-755d954778-bngv9","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng","openshift-multus/multus-additional-cni-plugins-mvdkf","assisted-installer/assisted-installer-controller-p2zdr","openshift-cluster
-version/cluster-version-operator-76959b6567-9fxxl","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2","openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp","openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs","openshift-dns-operator/dns-operator-86b8869b79-4rfwq"]
Feb 16 02:06:37.670605 master-0 kubenswrapper[7721]: I0216 02:06:37.670546 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-p2zdr"
Feb 16 02:06:37.670734 master-0 kubenswrapper[7721]: I0216 02:06:37.670632 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:06:37.670878 master-0 kubenswrapper[7721]: I0216 02:06:37.670832 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:37.671617 master-0 kubenswrapper[7721]: I0216 02:06:37.671570 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.673067 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.673156 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.674121 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.674295 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.674555 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.674869 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.675496 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.675679 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.675943 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.676097 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.676277 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.676317 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.676364 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.676778 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.676911 7721
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.676943 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.677142 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.677318 master-0 kubenswrapper[7721]: I0216 02:06:37.677349 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.677552 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.677591 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.678257 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.678886 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.681392 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.681461 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.682287 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.682353 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.682423 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.682761 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.682851 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.682767 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.683027 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.682901 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.683069 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 02:06:37.683636 master-0
kubenswrapper[7721]: I0216 02:06:37.683343 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.683401 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.683588 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 02:06:37.683636 master-0 kubenswrapper[7721]: I0216 02:06:37.683633 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.683706 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.684425 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.685037 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.687384 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.687783 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.687789 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.687899 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.688496 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.688649 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.688904 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.689115 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.689211 7721
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.689291 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.689307 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 02:06:37.689610 master-0 kubenswrapper[7721]: I0216 02:06:37.689613 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.690761 master-0 kubenswrapper[7721]: I0216 02:06:37.689716 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.690761 master-0 kubenswrapper[7721]: I0216 02:06:37.689861 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.690761 master-0 kubenswrapper[7721]: I0216 02:06:37.689932 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 02:06:37.690761 master-0 kubenswrapper[7721]: I0216 02:06:37.690123 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"
Feb 16 02:06:37.690761 master-0 kubenswrapper[7721]: I0216 02:06:37.690133 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 02:06:37.690761 master-0 kubenswrapper[7721]: I0216 02:06:37.690124 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"
Feb 16 02:06:37.690761 master-0 kubenswrapper[7721]: I0216 02:06:37.690450 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.692496 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.693130 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.693180 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.693499 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.694057 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.694971 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695022 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr872\" (UniqueName: \"kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695055 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695067 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695116 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert\") pod
\"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695165 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695203 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695324 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695374 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695420 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695479 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695523 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695644 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695708 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695758 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695808 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695994 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7gn\" (UniqueName: \"kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.696029 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2582m\" (UniqueName: \"kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m\") pod \"csi-snapshot-controller-operator-7b87b97578-8n9v4\" (UID:
\"d008dbd4-e713-4f2e-b64d-ca9cfc83a502\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.695850 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.696049 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.696202 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.696247 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.696380 7721 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.696467 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.696487 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp"
Feb 16 02:06:37.696408 master-0 kubenswrapper[7721]: I0216 02:06:37.696496 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696530 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rc6w\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID:
\"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696693 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696762 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696801 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696833 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696861 7721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7llx6\" (UniqueName: \"kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696875 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696894 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696923 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696956 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fmj\" (UniqueName: 
\"kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.696987 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697030 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697055 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697064 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " 
pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697083 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bff42\" (UniqueName: \"kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697081 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697163 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697213 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9sgx\" (UniqueName: \"kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697341 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697338 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7r8\" (UniqueName: \"kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697418 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697479 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697531 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8lvq\" (UniqueName: \"kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697568 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697605 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697637 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697713 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697728 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:37.699367 master-0 
kubenswrapper[7721]: I0216 02:06:37.697766 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697802 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697822 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697841 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697847 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config\") pod 
\"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697891 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhz2m\" (UniqueName: \"kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697952 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.697978 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8fj\" (UniqueName: \"kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698019 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698058 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698178 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698186 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698214 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698260 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698322 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698417 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698478 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698483 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698520 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcq6v\" (UniqueName: \"kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698559 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698593 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698619 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d49c\" (UniqueName: \"kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698669 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698682 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698916 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:37.699367 master-0 kubenswrapper[7721]: I0216 02:06:37.698967 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.699875 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.699936 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.701289 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.701691 7721 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.701692 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.702765 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.707735 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.708635 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.708913 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.710530 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.713004 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 02:06:37.714737 master-0 
kubenswrapper[7721]: I0216 02:06:37.713095 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.713354 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.713581 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.713755 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.713766 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.713885 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 02:06:37.714737 master-0 kubenswrapper[7721]: I0216 02:06:37.714385 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 02:06:37.715417 master-0 kubenswrapper[7721]: I0216 02:06:37.715393 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 02:06:37.715640 master-0 kubenswrapper[7721]: I0216 02:06:37.715617 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 02:06:37.715893 master-0 kubenswrapper[7721]: I0216 02:06:37.715801 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" 
Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.716747 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.716778 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.716799 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.716810 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717361 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717389 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717399 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717489 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717514 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717615 7721 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717651 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717779 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717801 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717817 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.717970 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 02:06:37.718142 master-0 kubenswrapper[7721]: I0216 02:06:37.718189 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 02:06:37.722229 master-0 kubenswrapper[7721]: I0216 02:06:37.718216 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 02:06:37.722229 master-0 kubenswrapper[7721]: I0216 02:06:37.718213 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 02:06:37.722229 master-0 kubenswrapper[7721]: I0216 02:06:37.719175 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 02:06:37.722229 master-0 kubenswrapper[7721]: I0216 02:06:37.719583 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:37.722229 master-0 kubenswrapper[7721]: I0216 02:06:37.719927 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 02:06:37.722229 master-0 kubenswrapper[7721]: I0216 02:06:37.720313 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 02:06:37.724128 master-0 kubenswrapper[7721]: I0216 02:06:37.723305 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 02:06:37.724295 master-0 kubenswrapper[7721]: I0216 02:06:37.724157 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 02:06:37.728658 master-0 kubenswrapper[7721]: I0216 02:06:37.728543 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:37.728986 master-0 kubenswrapper[7721]: I0216 02:06:37.728945 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 02:06:37.737644 master-0 kubenswrapper[7721]: I0216 02:06:37.737593 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 02:06:37.758112 master-0 kubenswrapper[7721]: I0216 02:06:37.758046 7721 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 02:06:37.778615 master-0 kubenswrapper[7721]: I0216 02:06:37.778553 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 02:06:37.782122 master-0 kubenswrapper[7721]: I0216 02:06:37.782054 7721 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799079 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799286 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799349 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799398 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799461 7721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799504 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799510 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799579 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799613 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: 
\"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799639 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799684 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799817 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgpcj\" (UniqueName: \"kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799851 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799883 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-pjsbs\" (UniqueName: \"kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799911 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799938 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.799974 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:06:37.799971 master-0 kubenswrapper[7721]: I0216 02:06:37.800009 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " 
pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800035 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800076 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800102 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800143 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800171 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800209 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800268 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800309 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800336 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 
kubenswrapper[7721]: I0216 02:06:37.800370 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800423 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800484 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24b6h\" (UniqueName: \"kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800524 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800558 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod 
\"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800583 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800615 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800657 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800694 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800726 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800753 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: E0216 02:06:37.800842 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800860 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800915 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.800952 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801028 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801158 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801213 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801422 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 
02:06:37.801485 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: E0216 02:06:37.801621 7721 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801620 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: E0216 02:06:37.801675 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.301654159 +0000 UTC m=+1.795888421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801667 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: E0216 02:06:37.801686 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.301681149 +0000 UTC m=+1.795915411 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801739 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: E0216 02:06:37.801756 7721 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: E0216 02:06:37.801803 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.301792042 +0000 UTC m=+1.796026534 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: E0216 02:06:37.801812 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801800 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: E0216 02:06:37.801833 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.301827263 +0000 UTC m=+1.796061525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801861 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.801918 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802027 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802095 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802228 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802258 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802327 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802377 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802393 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802494 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802382 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802412 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802561 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:37.803063 master-0 
kubenswrapper[7721]: I0216 02:06:37.802642 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9p9r\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802684 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802724 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802812 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802848 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: 
\"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802888 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf8t\" (UniqueName: \"kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802947 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802969 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.802983 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.803021 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.803057 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m4sb\" (UniqueName: \"kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.803095 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47lht\" (UniqueName: \"kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:37.803063 master-0 kubenswrapper[7721]: I0216 02:06:37.803129 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803211 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803394 7721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803478 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdllq\" (UniqueName: \"kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803529 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803574 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803612 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58cq8\" (UniqueName: \"kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: 
\"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803654 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: E0216 02:06:37.803671 7721 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803692 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: E0216 02:06:37.803709 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.30369798 +0000 UTC m=+1.797932242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803748 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803776 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803922 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803944 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 
02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.803998 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804020 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804031 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804071 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlnkb\" (UniqueName: \"kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804112 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804172 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804204 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804213 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804253 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804249 7721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804377 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804418 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804467 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804480 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " 
pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804534 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gmn\" (UniqueName: \"kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804569 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ns9l\" (UniqueName: \"kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804606 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804646 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804681 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: 
\"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804716 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804753 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804754 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804832 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.814903 master-0 
kubenswrapper[7721]: I0216 02:06:37.804852 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: E0216 02:06:37.804866 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804883 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804907 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: E0216 02:06:37.804937 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.3049082 +0000 UTC m=+1.799142502 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804947 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.804968 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.805006 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.805047 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: E0216 02:06:37.805150 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: E0216 02:06:37.805179 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.305171276 +0000 UTC m=+1.799405538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.805189 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:06:37.814903 master-0 kubenswrapper[7721]: I0216 02:06:37.805627 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:37.819410 master-0 kubenswrapper[7721]: I0216 02:06:37.818334 7721 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 02:06:37.827996 master-0 kubenswrapper[7721]: I0216 02:06:37.827522 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:37.853719 master-0 kubenswrapper[7721]: I0216 02:06:37.853658 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr872\" (UniqueName: \"kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:06:37.871044 master-0 kubenswrapper[7721]: I0216 02:06:37.870940 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:37.890694 master-0 kubenswrapper[7721]: I0216 02:06:37.890558 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2582m\" (UniqueName: \"kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m\") pod \"csi-snapshot-controller-operator-7b87b97578-8n9v4\" (UID: \"d008dbd4-e713-4f2e-b64d-ca9cfc83a502\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" Feb 16 02:06:37.906337 master-0 kubenswrapper[7721]: I0216 02:06:37.906228 7721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.906478 master-0 kubenswrapper[7721]: I0216 02:06:37.906415 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.906621 master-0 kubenswrapper[7721]: I0216 02:06:37.906554 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.906693 master-0 kubenswrapper[7721]: I0216 02:06:37.906634 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.906693 master-0 kubenswrapper[7721]: I0216 02:06:37.906658 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 
02:06:37.906835 master-0 kubenswrapper[7721]: I0216 02:06:37.906703 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.906835 master-0 kubenswrapper[7721]: I0216 02:06:37.906752 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.906835 master-0 kubenswrapper[7721]: I0216 02:06:37.906787 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:37.906835 master-0 kubenswrapper[7721]: E0216 02:06:37.906831 7721 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: I0216 02:06:37.906840 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: I0216 02:06:37.906861 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: I0216 02:06:37.906849 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: E0216 02:06:37.906910 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.406880303 +0000 UTC m=+1.901114805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: I0216 02:06:37.906920 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: I0216 02:06:37.906940 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: E0216 02:06:37.906943 7721 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: I0216 02:06:37.906973 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.907018 master-0 kubenswrapper[7721]: E0216 02:06:37.906994 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. 
No retries permitted until 2026-02-16 02:06:38.406980306 +0000 UTC m=+1.901214828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907187 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907295 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907338 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907368 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 
02:06:37.907448 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907464 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907488 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907569 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907598 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907634 
7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907696 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907811 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907882 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907935 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.907997 
7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.908069 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:37.908023 master-0 kubenswrapper[7721]: I0216 02:06:37.908078 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908133 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908166 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908242 7721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908367 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908394 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908414 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908489 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908557 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908592 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908661 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908735 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908799 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.908982 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909027 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909047 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909064 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909102 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909115 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909184 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909225 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909235 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909267 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909294 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909301 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909362 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909278 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909365 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: E0216 02:06:37.909417 7721 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret 
"multus-admission-controller-secret" not found Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909475 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909486 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909450 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: E0216 02:06:37.909504 7721 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: E0216 02:06:37.909513 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.409498778 +0000 UTC m=+1.903733040 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: E0216 02:06:37.909663 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.409651732 +0000 UTC m=+1.903886254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909717 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:37.909697 master-0 kubenswrapper[7721]: I0216 02:06:37.909752 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.909780 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.909803 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.909863 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.909916 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: E0216 02:06:37.909949 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.909951 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.909966 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: E0216 02:06:37.909986 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.40997478 +0000 UTC m=+1.904209042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.910019 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.910031 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod 
\"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.910058 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.910062 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.910097 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: I0216 02:06:37.910103 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: E0216 02:06:37.910167 7721 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 02:06:37.911926 master-0 kubenswrapper[7721]: E0216 02:06:37.910208 7721 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:06:38.410196496 +0000 UTC m=+1.904430758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : secret "metrics-daemon-secret" not found Feb 16 02:06:37.912961 master-0 kubenswrapper[7721]: I0216 02:06:37.912212 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7gn\" (UniqueName: \"kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:06:37.943689 master-0 kubenswrapper[7721]: I0216 02:06:37.943556 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:37.949071 master-0 kubenswrapper[7721]: I0216 02:06:37.949002 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.956213 master-0 kubenswrapper[7721]: I0216 02:06:37.956163 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:37.959053 master-0 kubenswrapper[7721]: I0216 02:06:37.958998 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8rc6w\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:37.969045 master-0 kubenswrapper[7721]: I0216 02:06:37.969000 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7llx6\" (UniqueName: \"kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:06:37.974594 master-0 kubenswrapper[7721]: I0216 02:06:37.974532 7721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 02:06:38.030236 master-0 kubenswrapper[7721]: I0216 02:06:38.030103 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:38.031005 master-0 kubenswrapper[7721]: I0216 02:06:38.030947 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fmj\" (UniqueName: \"kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:38.032079 master-0 kubenswrapper[7721]: I0216 02:06:38.031369 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bff42\" (UniqueName: \"kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " 
pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:06:38.036981 master-0 kubenswrapper[7721]: I0216 02:06:38.036917 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:38.044557 master-0 kubenswrapper[7721]: I0216 02:06:38.044494 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9sgx\" (UniqueName: \"kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:38.078979 master-0 kubenswrapper[7721]: I0216 02:06:38.078914 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7r8\" (UniqueName: \"kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:06:38.079543 master-0 kubenswrapper[7721]: I0216 02:06:38.079495 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lvq\" (UniqueName: \"kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:38.092566 master-0 kubenswrapper[7721]: I0216 02:06:38.092520 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhz2m\" (UniqueName: \"kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " 
pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:06:38.111557 master-0 kubenswrapper[7721]: I0216 02:06:38.111488 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8fj\" (UniqueName: \"kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:38.130390 master-0 kubenswrapper[7721]: I0216 02:06:38.130336 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:38.149595 master-0 kubenswrapper[7721]: I0216 02:06:38.149519 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcq6v\" (UniqueName: \"kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:06:38.170419 master-0 kubenswrapper[7721]: I0216 02:06:38.170359 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d49c\" (UniqueName: \"kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:06:38.208224 master-0 kubenswrapper[7721]: I0216 02:06:38.208024 7721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:06:38.228661 master-0 kubenswrapper[7721]: I0216 02:06:38.228577 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgpcj\" (UniqueName: \"kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:38.248801 master-0 kubenswrapper[7721]: I0216 02:06:38.248726 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24b6h\" (UniqueName: \"kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:38.271124 master-0 kubenswrapper[7721]: I0216 02:06:38.271044 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsbs\" (UniqueName: \"kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:06:38.301895 master-0 kubenswrapper[7721]: I0216 02:06:38.298760 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: 
\"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:38.314502 master-0 kubenswrapper[7721]: I0216 02:06:38.314456 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvf8t\" (UniqueName: \"kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:38.336301 master-0 kubenswrapper[7721]: I0216 02:06:38.336259 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:38.336463 master-0 kubenswrapper[7721]: I0216 02:06:38.336348 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:38.336463 master-0 kubenswrapper[7721]: I0216 02:06:38.336406 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:38.336463 master-0 kubenswrapper[7721]: I0216 02:06:38.336445 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:38.336672 master-0 kubenswrapper[7721]: I0216 02:06:38.336489 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:38.336672 master-0 kubenswrapper[7721]: I0216 02:06:38.336508 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:38.336672 master-0 kubenswrapper[7721]: I0216 02:06:38.336548 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:38.336672 master-0 kubenswrapper[7721]: E0216 02:06:38.336650 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 02:06:38.336901 master-0 kubenswrapper[7721]: E0216 02:06:38.336691 7721 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.336676402 +0000 UTC m=+2.830910654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:38.338043 master-0 kubenswrapper[7721]: E0216 02:06:38.337722 7721 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:38.338043 master-0 kubenswrapper[7721]: E0216 02:06:38.337759 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.337749429 +0000 UTC m=+2.831983691 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found Feb 16 02:06:38.338043 master-0 kubenswrapper[7721]: E0216 02:06:38.337885 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 02:06:38.338043 master-0 kubenswrapper[7721]: E0216 02:06:38.337994 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.337962314 +0000 UTC m=+2.832196606 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found Feb 16 02:06:38.339764 master-0 kubenswrapper[7721]: E0216 02:06:38.338244 7721 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:38.339764 master-0 kubenswrapper[7721]: E0216 02:06:38.338534 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.338429976 +0000 UTC m=+2.832664298 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found Feb 16 02:06:38.339764 master-0 kubenswrapper[7721]: E0216 02:06:38.338682 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:38.339764 master-0 kubenswrapper[7721]: E0216 02:06:38.338720 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.338712143 +0000 UTC m=+2.832946405 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:38.339764 master-0 kubenswrapper[7721]: E0216 02:06:38.338543 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:38.339764 master-0 kubenswrapper[7721]: E0216 02:06:38.339192 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.339167894 +0000 UTC m=+2.833402196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:38.339764 master-0 kubenswrapper[7721]: E0216 02:06:38.338421 7721 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:38.339764 master-0 kubenswrapper[7721]: E0216 02:06:38.339474 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.339262697 +0000 UTC m=+2.833496989 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:38.344807 master-0 kubenswrapper[7721]: I0216 02:06:38.344733 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9p9r\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:38.363816 master-0 kubenswrapper[7721]: I0216 02:06:38.362974 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:06:38.368755 master-0 kubenswrapper[7721]: E0216 02:06:38.368691 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:06:38.387554 master-0 kubenswrapper[7721]: E0216 02:06:38.387200 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:38.412157 master-0 kubenswrapper[7721]: I0216 02:06:38.412064 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47lht\" (UniqueName: \"kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht\") pod 
\"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:06:38.437551 master-0 kubenswrapper[7721]: I0216 02:06:38.437383 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:38.437551 master-0 kubenswrapper[7721]: I0216 02:06:38.437513 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:38.437818 master-0 kubenswrapper[7721]: I0216 02:06:38.437759 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:38.437928 master-0 kubenswrapper[7721]: I0216 02:06:38.437834 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:38.437928 
master-0 kubenswrapper[7721]: I0216 02:06:38.437922 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:38.438082 master-0 kubenswrapper[7721]: I0216 02:06:38.437990 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:38.438271 master-0 kubenswrapper[7721]: E0216 02:06:38.438191 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 02:06:38.438513 master-0 kubenswrapper[7721]: E0216 02:06:38.438483 7721 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 02:06:38.438623 master-0 kubenswrapper[7721]: E0216 02:06:38.438600 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.438574514 +0000 UTC m=+2.932808816 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found Feb 16 02:06:38.438940 master-0 kubenswrapper[7721]: E0216 02:06:38.438907 7721 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:38.439004 master-0 kubenswrapper[7721]: E0216 02:06:38.438962 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.438945243 +0000 UTC m=+2.933179505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:38.439086 master-0 kubenswrapper[7721]: E0216 02:06:38.439006 7721 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 02:06:38.439086 master-0 kubenswrapper[7721]: E0216 02:06:38.439027 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.439020255 +0000 UTC m=+2.933254517 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found Feb 16 02:06:38.439602 master-0 kubenswrapper[7721]: I0216 02:06:38.439571 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m4sb\" (UniqueName: \"kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:38.439673 master-0 kubenswrapper[7721]: E0216 02:06:38.439641 7721 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:38.439673 master-0 kubenswrapper[7721]: E0216 02:06:38.439668 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.439659751 +0000 UTC m=+2.933894013 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:38.439784 master-0 kubenswrapper[7721]: E0216 02:06:38.439718 7721 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 02:06:38.439784 master-0 kubenswrapper[7721]: E0216 02:06:38.439738 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.439731992 +0000 UTC m=+2.933966254 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : secret "metrics-daemon-secret" not found Feb 16 02:06:38.440042 master-0 kubenswrapper[7721]: E0216 02:06:38.439998 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:39.439969318 +0000 UTC m=+2.934203620 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found Feb 16 02:06:38.447691 master-0 kubenswrapper[7721]: E0216 02:06:38.447634 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:06:38.481384 master-0 kubenswrapper[7721]: I0216 02:06:38.481330 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdllq\" (UniqueName: \"kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:06:38.487427 master-0 kubenswrapper[7721]: E0216 02:06:38.487389 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:06:38.518843 master-0 kubenswrapper[7721]: I0216 02:06:38.518789 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cq8\" (UniqueName: \"kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:06:38.527698 master-0 kubenswrapper[7721]: W0216 02:06:38.527650 7721 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set 
securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 16 02:06:38.527803 master-0 kubenswrapper[7721]: E0216 02:06:38.527751 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:06:38.559117 master-0 kubenswrapper[7721]: I0216 02:06:38.559050 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlnkb\" (UniqueName: \"kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:06:38.571999 master-0 kubenswrapper[7721]: I0216 02:06:38.571940 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gmn\" (UniqueName: \"kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:38.602809 master-0 kubenswrapper[7721]: I0216 02:06:38.602759 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ns9l\" (UniqueName: \"kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l\") pod 
\"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:06:38.612830 master-0 kubenswrapper[7721]: I0216 02:06:38.612781 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:06:38.641234 master-0 kubenswrapper[7721]: I0216 02:06:38.641176 7721 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 02:06:38.648844 master-0 kubenswrapper[7721]: I0216 02:06:38.648761 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:38.894829 master-0 kubenswrapper[7721]: I0216 02:06:38.894748 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:06:38.994167 master-0 kubenswrapper[7721]: E0216 02:06:38.993771 7721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" Feb 16 02:06:38.994804 master-0 kubenswrapper[7721]: E0216 02:06:38.994519 7721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa02bd6af0b44d4955aba57f727e13671f503393926be6d8965e31dcfcd6e3c,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.32,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{
Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-78ff47c7c5-dgxhp_openshift-kube-controller-manager-operator(a8f33151-61df-4b66-ba85-9ba210779059): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 02:06:38.996085 master-0 kubenswrapper[7721]: E0216 02:06:38.996024 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" podUID="a8f33151-61df-4b66-ba85-9ba210779059" Feb 16 02:06:39.349997 
master-0 kubenswrapper[7721]: I0216 02:06:39.349901 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:39.350257 master-0 kubenswrapper[7721]: E0216 02:06:39.350085 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:39.350257 master-0 kubenswrapper[7721]: E0216 02:06:39.350190 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.350161608 +0000 UTC m=+4.844395910 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:39.350257 master-0 kubenswrapper[7721]: I0216 02:06:39.350251 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:39.350615 master-0 kubenswrapper[7721]: I0216 02:06:39.350361 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:39.350615 master-0 kubenswrapper[7721]: I0216 02:06:39.350398 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:39.350615 master-0 kubenswrapper[7721]: I0216 02:06:39.350528 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:39.350615 master-0 kubenswrapper[7721]: I0216 02:06:39.350576 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:39.351333 master-0 kubenswrapper[7721]: I0216 02:06:39.350707 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:39.351333 master-0 kubenswrapper[7721]: E0216 02:06:39.350891 7721 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:39.351333 master-0 kubenswrapper[7721]: E0216 02:06:39.350934 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.350919287 +0000 UTC m=+4.845153579 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:39.351626 master-0 kubenswrapper[7721]: E0216 02:06:39.351417 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 02:06:39.351626 master-0 kubenswrapper[7721]: E0216 02:06:39.351527 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.351511761 +0000 UTC m=+4.845746063 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found Feb 16 02:06:39.351626 master-0 kubenswrapper[7721]: E0216 02:06:39.351599 7721 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:39.351626 master-0 kubenswrapper[7721]: E0216 02:06:39.351633 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.351622114 +0000 UTC m=+4.845856406 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found Feb 16 02:06:39.351943 master-0 kubenswrapper[7721]: E0216 02:06:39.351691 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:39.351943 master-0 kubenswrapper[7721]: E0216 02:06:39.351726 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.351713576 +0000 UTC m=+4.845947868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:39.351943 master-0 kubenswrapper[7721]: E0216 02:06:39.351785 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 02:06:39.351943 master-0 kubenswrapper[7721]: E0216 02:06:39.351817 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.351805539 +0000 UTC m=+4.846039841 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:39.351943 master-0 kubenswrapper[7721]: E0216 02:06:39.351876 7721 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:39.351943 master-0 kubenswrapper[7721]: E0216 02:06:39.351907 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.351896541 +0000 UTC m=+4.846130833 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found Feb 16 02:06:39.452227 master-0 kubenswrapper[7721]: I0216 02:06:39.452176 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:39.452454 master-0 kubenswrapper[7721]: I0216 02:06:39.452259 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 
02:06:39.452454 master-0 kubenswrapper[7721]: I0216 02:06:39.452283 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:39.452454 master-0 kubenswrapper[7721]: I0216 02:06:39.452321 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:39.452564 master-0 kubenswrapper[7721]: E0216 02:06:39.452498 7721 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:39.452637 master-0 kubenswrapper[7721]: E0216 02:06:39.452603 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.452575062 +0000 UTC m=+4.946809554 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:39.452695 master-0 kubenswrapper[7721]: I0216 02:06:39.452603 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:39.452695 master-0 kubenswrapper[7721]: E0216 02:06:39.452671 7721 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 02:06:39.452767 master-0 kubenswrapper[7721]: E0216 02:06:39.452700 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 02:06:39.452767 master-0 kubenswrapper[7721]: E0216 02:06:39.452718 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.452703765 +0000 UTC m=+4.946938027 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : secret "metrics-daemon-secret" not found Feb 16 02:06:39.452767 master-0 kubenswrapper[7721]: E0216 02:06:39.452744 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.452730866 +0000 UTC m=+4.946965158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found Feb 16 02:06:39.452879 master-0 kubenswrapper[7721]: I0216 02:06:39.452786 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:39.452879 master-0 kubenswrapper[7721]: E0216 02:06:39.452840 7721 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 02:06:39.452879 master-0 kubenswrapper[7721]: E0216 02:06:39.452852 7721 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:39.452879 master-0 kubenswrapper[7721]: E0216 
02:06:39.452881 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.452873929 +0000 UTC m=+4.947108191 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:39.453678 master-0 kubenswrapper[7721]: E0216 02:06:39.452917 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.45289329 +0000 UTC m=+4.947127592 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found Feb 16 02:06:39.453678 master-0 kubenswrapper[7721]: E0216 02:06:39.452923 7721 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 02:06:39.453678 master-0 kubenswrapper[7721]: E0216 02:06:39.452966 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:41.452953981 +0000 UTC m=+4.947188273 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found Feb 16 02:06:39.661304 master-0 kubenswrapper[7721]: E0216 02:06:39.651142 7721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:c81d64f375332c491d874d6f95c581360e73c884a6c8d1fad90c74e286480cf7: Get \"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c81d64f375332c491d874d6f95c581360e73c884a6c8d1fad90c74e286480cf7\": context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" Feb 16 02:06:39.661304 master-0 kubenswrapper[7721]: E0216 02:06:39.651977 7721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:etcd-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399,Command:[cluster-etcd-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml --terminate-on-files=/var/run/secrets/serving-cert/tls.crt --terminate-on-files=/var/run/secrets/serving-cert/tls.key --terminate-on-files=/var/run/secrets/etcd-client/tls.crt --terminate-on-files=/var/run/secrets/etcd-client/tls.key --terminate-on-files=/var/run/configmaps/etcd-ca/ca-bundle.crt 
--terminate-on-files=/var/run/configmaps/etcd-service-ca/service-ca.crt],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.32,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.32,ValueFrom:nil,},EnvVar{Name:OPENSHIFT_PROFILE,Value:web,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-service-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-client,ReadOnly:false,MountPath:/var/run/secrets/etcd-client,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bff42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
etcd-operator-67bf55ccdd-htjgz_openshift-etcd-operator(724ac845-3835-458b-9645-e665be135ff9): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:c81d64f375332c491d874d6f95c581360e73c884a6c8d1fad90c74e286480cf7: Get \"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c81d64f375332c491d874d6f95c581360e73c884a6c8d1fad90c74e286480cf7\": context canceled" logger="UnhandledError" Feb 16 02:06:39.661304 master-0 kubenswrapper[7721]: E0216 02:06:39.653248 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:c81d64f375332c491d874d6f95c581360e73c884a6c8d1fad90c74e286480cf7: Get \\\"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c81d64f375332c491d874d6f95c581360e73c884a6c8d1fad90c74e286480cf7\\\": context canceled\"" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" podUID="724ac845-3835-458b-9645-e665be135ff9" Feb 16 02:06:39.677259 master-0 kubenswrapper[7721]: E0216 02:06:39.677079 7721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" Feb 16 02:06:39.680313 master-0 kubenswrapper[7721]: E0216 02:06:39.679610 7721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1,Command:[],Args:[start 
-v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8729815d028242b26db06f2ff44816d022e0e7eac34b7b8df11d27d938fe057e,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.32,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.32,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2582m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-7b87b97578-8n9v4_openshift-cluster-storage-operator(d008dbd4-e713-4f2e-b64d-ca9cfc83a502): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 
02:06:39.681590 master-0 kubenswrapper[7721]: E0216 02:06:39.680789 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" podUID="d008dbd4-e713-4f2e-b64d-ca9cfc83a502" Feb 16 02:06:39.960309 master-0 kubenswrapper[7721]: I0216 02:06:39.959364 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:39.960309 master-0 kubenswrapper[7721]: I0216 02:06:39.959996 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 02:06:39.966524 master-0 kubenswrapper[7721]: I0216 02:06:39.966482 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 02:06:39.988036 master-0 kubenswrapper[7721]: I0216 02:06:39.987975 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-hswdj"] Feb 16 02:06:40.003014 master-0 kubenswrapper[7721]: W0216 02:06:40.002928 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode478bdcc_052e_42f8_91b6_58c26cfc9cfc.slice/crio-4bdfcd16962800c84c212d75250cbdd940c7636b2eed7dc4de2b7ca286aca5c8 WatchSource:0}: Error finding container 4bdfcd16962800c84c212d75250cbdd940c7636b2eed7dc4de2b7ca286aca5c8: Status 404 returned error can't find the container with id 4bdfcd16962800c84c212d75250cbdd940c7636b2eed7dc4de2b7ca286aca5c8 Feb 16 02:06:40.815814 master-0 kubenswrapper[7721]: I0216 02:06:40.815709 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" 
event={"ID":"e379cfaf-3a4c-40e7-8641-3524b3669295","Type":"ContainerStarted","Data":"561891ec1509f7c4965b19f5a07719f12421d6e230fb355e2417164216f94e4e"}
Feb 16 02:06:40.818816 master-0 kubenswrapper[7721]: I0216 02:06:40.818751 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" event={"ID":"1743372f-bdb0-4558-b47b-3714f3aa3fde","Type":"ContainerStarted","Data":"1f742ab76573db69bc143df83fcf581f4c09f3de9ec005f01809b1af5690b4d3"}
Feb 16 02:06:40.825659 master-0 kubenswrapper[7721]: I0216 02:06:40.825598 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:40.825806 master-0 kubenswrapper[7721]: I0216 02:06:40.825743 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:06:40.826702 master-0 kubenswrapper[7721]: I0216 02:06:40.826656 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" event={"ID":"6c02961f-30ec-4405-b7fa-9c4192342ae9","Type":"ContainerStarted","Data":"4af3b63baf882cf9b9d02a791b48a2c854ad5ccd1fbd43903fb2e66b8e587e95"}
Feb 16 02:06:40.830252 master-0 kubenswrapper[7721]: I0216 02:06:40.830196 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" event={"ID":"91938be6-9ae4-4849-abe8-fc842daecd23","Type":"ContainerStarted","Data":"3af1f9b9834764b079edacabd51db4c771ce412df5b31f88b96200c070e64727"}
Feb 16 02:06:40.834048 master-0 kubenswrapper[7721]: I0216 02:06:40.832850 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" event={"ID":"4a5b01c1-1231-4e69-8b6c-c4981b65b26e","Type":"ContainerStarted","Data":"5fc216122a0910fe569f2678eb8d9427d5895c0ca4368e57e2f60b2f9f7164e2"}
Feb 16 02:06:40.837497 master-0 kubenswrapper[7721]: I0216 02:06:40.836639 7721 generic.go:334] "Generic (PLEG): container finished" podID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerID="c10263f4e3b822ac06417bf8ee62f4bdd1bc382e08a14d4e83e8823933674455" exitCode=0
Feb 16 02:06:40.837497 master-0 kubenswrapper[7721]: I0216 02:06:40.836701 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerDied","Data":"c10263f4e3b822ac06417bf8ee62f4bdd1bc382e08a14d4e83e8823933674455"}
Feb 16 02:06:40.848652 master-0 kubenswrapper[7721]: I0216 02:06:40.842960 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" event={"ID":"c9cd32bc-a13a-44ee-ba52-7bb335c7007b","Type":"ContainerStarted","Data":"3c59868e46c60a0139fbb9feace033d7ff3288c7e5f3febf6586656bff57983b"}
Feb 16 02:06:40.848652 master-0 kubenswrapper[7721]: I0216 02:06:40.844558 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-hswdj" event={"ID":"e478bdcc-052e-42f8-91b6-58c26cfc9cfc","Type":"ContainerStarted","Data":"2f6a07e9f1245d9d11ff174025d456afa3273d956b384a9a3e7efb74485ce379"}
Feb 16 02:06:40.848652 master-0 kubenswrapper[7721]: I0216 02:06:40.844584 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-hswdj" event={"ID":"e478bdcc-052e-42f8-91b6-58c26cfc9cfc","Type":"ContainerStarted","Data":"4bdfcd16962800c84c212d75250cbdd940c7636b2eed7dc4de2b7ca286aca5c8"}
Feb 16 02:06:40.866491 master-0 kubenswrapper[7721]: I0216 02:06:40.862420 7721 generic.go:334] "Generic (PLEG): container finished" podID="1f2d2601-481d-4e86-ac4c-3d34d5691261" containerID="1a2437279ddd7677a612b841ab7564a6229cb4e0606b54d09369835e4da58be3" exitCode=0
Feb 16 02:06:40.866491 master-0 kubenswrapper[7721]: I0216 02:06:40.863092 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" event={"ID":"1f2d2601-481d-4e86-ac4c-3d34d5691261","Type":"ContainerDied","Data":"1a2437279ddd7677a612b841ab7564a6229cb4e0606b54d09369835e4da58be3"}
Feb 16 02:06:41.396117 master-0 kubenswrapper[7721]: I0216 02:06:41.396053 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"
Feb 16 02:06:41.396799 master-0 kubenswrapper[7721]: I0216 02:06:41.396140 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:06:41.396799 master-0 kubenswrapper[7721]: I0216 02:06:41.396181 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:41.396799 master-0 kubenswrapper[7721]: I0216 02:06:41.396246 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:06:41.396799 master-0 kubenswrapper[7721]: I0216 02:06:41.396274 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:06:41.396799 master-0 kubenswrapper[7721]: I0216 02:06:41.396336 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:06:41.396799 master-0 kubenswrapper[7721]: I0216 02:06:41.396368 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:06:41.396799 master-0 kubenswrapper[7721]: E0216 02:06:41.396568 7721 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 02:06:41.396799 master-0 kubenswrapper[7721]: E0216 02:06:41.396632 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.396611236 +0000 UTC m=+8.890845518 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found
Feb 16 02:06:41.397292 master-0 kubenswrapper[7721]: E0216 02:06:41.397133 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 16 02:06:41.397292 master-0 kubenswrapper[7721]: E0216 02:06:41.397175 7721 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 02:06:41.397292 master-0 kubenswrapper[7721]: E0216 02:06:41.397225 7721 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 16 02:06:41.397292 master-0 kubenswrapper[7721]: E0216 02:06:41.397183 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.39716927 +0000 UTC m=+8.891403552 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found
Feb 16 02:06:41.397292 master-0 kubenswrapper[7721]: E0216 02:06:41.397261 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.397243602 +0000 UTC m=+8.891477864 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found
Feb 16 02:06:41.397292 master-0 kubenswrapper[7721]: E0216 02:06:41.397274 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.397267573 +0000 UTC m=+8.891501835 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found
Feb 16 02:06:41.397516 master-0 kubenswrapper[7721]: E0216 02:06:41.397306 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 16 02:06:41.397516 master-0 kubenswrapper[7721]: E0216 02:06:41.397338 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.397327424 +0000 UTC m=+8.891561926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found
Feb 16 02:06:41.397516 master-0 kubenswrapper[7721]: E0216 02:06:41.397392 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 16 02:06:41.397516 master-0 kubenswrapper[7721]: E0216 02:06:41.397423 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.397412356 +0000 UTC m=+8.891646828 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found
Feb 16 02:06:41.397516 master-0 kubenswrapper[7721]: E0216 02:06:41.397500 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 16 02:06:41.397657 master-0 kubenswrapper[7721]: E0216 02:06:41.397533 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.397522289 +0000 UTC m=+8.891756791 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found
Feb 16 02:06:41.496861 master-0 kubenswrapper[7721]: I0216 02:06:41.496819 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2"
Feb 16 02:06:41.497110 master-0 kubenswrapper[7721]: I0216 02:06:41.497090 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"
Feb 16 02:06:41.497371 master-0 kubenswrapper[7721]: E0216 02:06:41.497313 7721 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 16 02:06:41.497459 master-0 kubenswrapper[7721]: E0216 02:06:41.497447 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.49740101 +0000 UTC m=+8.991635432 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found
Feb 16 02:06:41.497521 master-0 kubenswrapper[7721]: E0216 02:06:41.497468 7721 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 16 02:06:41.497571 master-0 kubenswrapper[7721]: E0216 02:06:41.497539 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.497522033 +0000 UTC m=+8.991756305 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : secret "metrics-daemon-secret" not found
Feb 16 02:06:41.497571 master-0 kubenswrapper[7721]: I0216 02:06:41.497335 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:06:41.497857 master-0 kubenswrapper[7721]: I0216 02:06:41.497810 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"
Feb 16 02:06:41.498005 master-0 kubenswrapper[7721]: E0216 02:06:41.497980 7721 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 16 02:06:41.498063 master-0 kubenswrapper[7721]: I0216 02:06:41.498014 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:06:41.498106 master-0 kubenswrapper[7721]: I0216 02:06:41.498079 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:06:41.498181 master-0 kubenswrapper[7721]: E0216 02:06:41.498157 7721 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 16 02:06:41.498234 master-0 kubenswrapper[7721]: E0216 02:06:41.498204 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.4981909 +0000 UTC m=+8.992425172 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found
Feb 16 02:06:41.498285 master-0 kubenswrapper[7721]: E0216 02:06:41.498230 7721 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 16 02:06:41.498285 master-0 kubenswrapper[7721]: E0216 02:06:41.498267 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.498256531 +0000 UTC m=+8.992490803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found
Feb 16 02:06:41.498364 master-0 kubenswrapper[7721]: E0216 02:06:41.498290 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.498278152 +0000 UTC m=+8.992512654 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found
Feb 16 02:06:41.498565 master-0 kubenswrapper[7721]: E0216 02:06:41.498476 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 16 02:06:41.498565 master-0 kubenswrapper[7721]: E0216 02:06:41.498543 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.498526838 +0000 UTC m=+8.992761110 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found
Feb 16 02:06:41.748565 master-0 kubenswrapper[7721]: I0216 02:06:41.747320 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:41.786471 master-0 kubenswrapper[7721]: I0216 02:06:41.786411 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:41.841731 master-0 kubenswrapper[7721]: I0216 02:06:41.841662 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:41.847144 master-0 kubenswrapper[7721]: I0216 02:06:41.847003 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:41.889463 master-0 kubenswrapper[7721]: I0216 02:06:41.889244 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:06:41.895454 master-0 kubenswrapper[7721]: I0216 02:06:41.889770 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:06:41.895454 master-0 kubenswrapper[7721]: I0216 02:06:41.890320 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:06:41.895454 master-0 kubenswrapper[7721]: I0216 02:06:41.890331 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:06:41.915454 master-0 kubenswrapper[7721]: I0216 02:06:41.904769 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:42.087893 master-0 kubenswrapper[7721]: I0216 02:06:42.087770 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-t5hvw"]
Feb 16 02:06:42.088067 master-0 kubenswrapper[7721]: E0216 02:06:42.087928 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:06:42.088067 master-0 kubenswrapper[7721]: I0216 02:06:42.087939 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:06:42.088067 master-0 kubenswrapper[7721]: E0216 02:06:42.087952 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a874e346-456c-4e93-87bd-7b70434ddeb1" containerName="prober"
Feb 16 02:06:42.088067 master-0 kubenswrapper[7721]: I0216 02:06:42.087958 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a874e346-456c-4e93-87bd-7b70434ddeb1" containerName="prober"
Feb 16 02:06:42.088067 master-0 kubenswrapper[7721]: I0216 02:06:42.088018 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a874e346-456c-4e93-87bd-7b70434ddeb1" containerName="prober"
Feb 16 02:06:42.088067 master-0 kubenswrapper[7721]: I0216 02:06:42.088026 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:06:42.089305 master-0 kubenswrapper[7721]: I0216 02:06:42.088364 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.091013 master-0 kubenswrapper[7721]: I0216 02:06:42.090987 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 02:06:42.091862 master-0 kubenswrapper[7721]: I0216 02:06:42.091836 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 02:06:42.092421 master-0 kubenswrapper[7721]: I0216 02:06:42.092393 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 02:06:42.093257 master-0 kubenswrapper[7721]: I0216 02:06:42.093223 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 02:06:42.093584 master-0 kubenswrapper[7721]: I0216 02:06:42.093553 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 02:06:42.093735 master-0 kubenswrapper[7721]: I0216 02:06:42.093706 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 02:06:42.102547 master-0 kubenswrapper[7721]: I0216 02:06:42.102517 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-t5hvw"]
Feb 16 02:06:42.207247 master-0 kubenswrapper[7721]: I0216 02:06:42.207195 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.207247 master-0 kubenswrapper[7721]: I0216 02:06:42.207246 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd6jx\" (UniqueName: \"kubernetes.io/projected/b460e889-753e-44c7-91a7-dc3f60cf4ad3-kube-api-access-gd6jx\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.207551 master-0 kubenswrapper[7721]: I0216 02:06:42.207447 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.207551 master-0 kubenswrapper[7721]: I0216 02:06:42.207500 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.207551 master-0 kubenswrapper[7721]: I0216 02:06:42.207527 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.309114 master-0 kubenswrapper[7721]: I0216 02:06:42.308998 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.309264 master-0 kubenswrapper[7721]: E0216 02:06:42.309187 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Feb 16 02:06:42.309338 master-0 kubenswrapper[7721]: E0216 02:06:42.309299 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:42.809261408 +0000 UTC m=+6.303495670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : configmap "openshift-global-ca" not found
Feb 16 02:06:42.309719 master-0 kubenswrapper[7721]: I0216 02:06:42.309678 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.309719 master-0 kubenswrapper[7721]: I0216 02:06:42.309716 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd6jx\" (UniqueName: \"kubernetes.io/projected/b460e889-753e-44c7-91a7-dc3f60cf4ad3-kube-api-access-gd6jx\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.309843 master-0 kubenswrapper[7721]: I0216 02:06:42.309779 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.309843 master-0 kubenswrapper[7721]: I0216 02:06:42.309800 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.309921 master-0 kubenswrapper[7721]: E0216 02:06:42.309861 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 02:06:42.309921 master-0 kubenswrapper[7721]: E0216 02:06:42.309884 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:42.809876443 +0000 UTC m=+6.304110705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : configmap "client-ca" not found
Feb 16 02:06:42.310008 master-0 kubenswrapper[7721]: E0216 02:06:42.309947 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Feb 16 02:06:42.310008 master-0 kubenswrapper[7721]: E0216 02:06:42.309970 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:42.809963845 +0000 UTC m=+6.304198107 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : configmap "config" not found
Feb 16 02:06:42.310166 master-0 kubenswrapper[7721]: E0216 02:06:42.310146 7721 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 02:06:42.310232 master-0 kubenswrapper[7721]: E0216 02:06:42.310174 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:42.810168 +0000 UTC m=+6.304402262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : secret "serving-cert" not found
Feb 16 02:06:42.345927 master-0 kubenswrapper[7721]: I0216 02:06:42.345796 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd6jx\" (UniqueName: \"kubernetes.io/projected/b460e889-753e-44c7-91a7-dc3f60cf4ad3-kube-api-access-gd6jx\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.350387 master-0 kubenswrapper[7721]: I0216 02:06:42.350320 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"]
Feb 16 02:06:42.350909 master-0 kubenswrapper[7721]: I0216 02:06:42.350877 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"
Feb 16 02:06:42.352680 master-0 kubenswrapper[7721]: I0216 02:06:42.352657 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 02:06:42.353729 master-0 kubenswrapper[7721]: I0216 02:06:42.353707 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 02:06:42.363890 master-0 kubenswrapper[7721]: I0216 02:06:42.363821 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"]
Feb 16 02:06:42.411396 master-0 kubenswrapper[7721]: I0216 02:06:42.411340 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnh76\" (UniqueName: \"kubernetes.io/projected/676adb95-3ffd-43e5-89e3-9d7a7d74df28-kube-api-access-lnh76\") pod \"migrator-5bd989df77-sh2wj\" (UID: \"676adb95-3ffd-43e5-89e3-9d7a7d74df28\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"
Feb 16 02:06:42.424175 master-0 kubenswrapper[7721]: I0216 02:06:42.424132 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:42.433538 master-0 kubenswrapper[7721]: I0216 02:06:42.433421 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:06:42.513005 master-0 kubenswrapper[7721]: I0216 02:06:42.512932 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnh76\" (UniqueName: \"kubernetes.io/projected/676adb95-3ffd-43e5-89e3-9d7a7d74df28-kube-api-access-lnh76\") pod \"migrator-5bd989df77-sh2wj\" (UID: \"676adb95-3ffd-43e5-89e3-9d7a7d74df28\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"
Feb 16 02:06:42.540801 master-0 kubenswrapper[7721]: I0216 02:06:42.540729 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh76\" (UniqueName: \"kubernetes.io/projected/676adb95-3ffd-43e5-89e3-9d7a7d74df28-kube-api-access-lnh76\") pod \"migrator-5bd989df77-sh2wj\" (UID: \"676adb95-3ffd-43e5-89e3-9d7a7d74df28\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"
Feb 16 02:06:42.676067 master-0 kubenswrapper[7721]: I0216 02:06:42.675971 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"
Feb 16 02:06:42.817595 master-0 kubenswrapper[7721]: I0216 02:06:42.817546 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.817824 master-0 kubenswrapper[7721]: I0216 02:06:42.817629 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.817824 master-0 kubenswrapper[7721]: I0216 02:06:42.817650 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.817824 master-0 kubenswrapper[7721]: E0216 02:06:42.817783 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Feb 16 02:06:42.818031 master-0 kubenswrapper[7721]: E0216 02:06:42.817828 7721 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 02:06:42.818031 master-0 kubenswrapper[7721]: E0216 02:06:42.817920 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 02:06:42.818031 master-0 kubenswrapper[7721]: I0216 02:06:42.817946 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw"
Feb 16 02:06:42.818031 master-0 kubenswrapper[7721]: E0216 02:06:42.817977 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:43.817942263 +0000 UTC m=+7.312176545 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : configmap "client-ca" not found Feb 16 02:06:42.818275 master-0 kubenswrapper[7721]: E0216 02:06:42.818088 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 16 02:06:42.818275 master-0 kubenswrapper[7721]: E0216 02:06:42.818098 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:43.818053006 +0000 UTC m=+7.312287298 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : secret "serving-cert" not found Feb 16 02:06:42.818275 master-0 kubenswrapper[7721]: E0216 02:06:42.818131 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:43.818117158 +0000 UTC m=+7.312351450 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : configmap "config" not found Feb 16 02:06:42.818275 master-0 kubenswrapper[7721]: E0216 02:06:42.818156 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:43.818139908 +0000 UTC m=+7.312374170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : configmap "openshift-global-ca" not found Feb 16 02:06:43.285256 master-0 kubenswrapper[7721]: I0216 02:06:43.285141 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-x6nhn"] Feb 16 02:06:43.286168 master-0 kubenswrapper[7721]: I0216 02:06:43.286115 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.291132 master-0 kubenswrapper[7721]: I0216 02:06:43.291076 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 02:06:43.291132 master-0 kubenswrapper[7721]: I0216 02:06:43.291118 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 02:06:43.291323 master-0 kubenswrapper[7721]: I0216 02:06:43.291132 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 02:06:43.291974 master-0 kubenswrapper[7721]: I0216 02:06:43.291816 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 02:06:43.294876 master-0 kubenswrapper[7721]: I0216 02:06:43.294278 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-x6nhn"] Feb 16 02:06:43.407524 master-0 kubenswrapper[7721]: I0216 02:06:43.406322 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-t5hvw"] Feb 16 02:06:43.407524 master-0 kubenswrapper[7721]: E0216 02:06:43.407011 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" podUID="b460e889-753e-44c7-91a7-dc3f60cf4ad3" Feb 16 02:06:43.412673 master-0 kubenswrapper[7721]: I0216 02:06:43.412608 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"] Feb 16 02:06:43.413538 master-0 kubenswrapper[7721]: I0216 02:06:43.413337 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.416746 master-0 kubenswrapper[7721]: I0216 02:06:43.416107 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 02:06:43.416746 master-0 kubenswrapper[7721]: I0216 02:06:43.416147 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 02:06:43.416746 master-0 kubenswrapper[7721]: I0216 02:06:43.416670 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 02:06:43.417454 master-0 kubenswrapper[7721]: I0216 02:06:43.417405 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 02:06:43.417644 master-0 kubenswrapper[7721]: I0216 02:06:43.417617 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 02:06:43.421833 master-0 kubenswrapper[7721]: I0216 02:06:43.421715 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"] Feb 16 02:06:43.429555 master-0 kubenswrapper[7721]: I0216 02:06:43.428749 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94gg\" (UniqueName: \"kubernetes.io/projected/9defdfff-eb18-4beb-9591-918d0e4b4236-kube-api-access-r94gg\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.429555 master-0 kubenswrapper[7721]: I0216 02:06:43.428824 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-cabundle\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.429555 master-0 kubenswrapper[7721]: I0216 02:06:43.428909 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-key\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.530620 master-0 kubenswrapper[7721]: I0216 02:06:43.530499 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-config\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.530620 master-0 kubenswrapper[7721]: I0216 02:06:43.530576 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94gg\" (UniqueName: \"kubernetes.io/projected/9defdfff-eb18-4beb-9591-918d0e4b4236-kube-api-access-r94gg\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.530620 master-0 kubenswrapper[7721]: I0216 02:06:43.530621 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-cabundle\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.530620 master-0 kubenswrapper[7721]: I0216 
02:06:43.530641 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.531209 master-0 kubenswrapper[7721]: I0216 02:06:43.531064 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-key\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.531209 master-0 kubenswrapper[7721]: I0216 02:06:43.531194 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.531544 master-0 kubenswrapper[7721]: I0216 02:06:43.531509 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9rl\" (UniqueName: \"kubernetes.io/projected/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-kube-api-access-5d9rl\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.531746 master-0 kubenswrapper[7721]: I0216 02:06:43.531682 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-cabundle\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.541310 master-0 kubenswrapper[7721]: I0216 02:06:43.540368 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-key\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.564395 master-0 kubenswrapper[7721]: I0216 02:06:43.564306 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94gg\" (UniqueName: \"kubernetes.io/projected/9defdfff-eb18-4beb-9591-918d0e4b4236-kube-api-access-r94gg\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.616030 master-0 kubenswrapper[7721]: I0216 02:06:43.615972 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:06:43.633511 master-0 kubenswrapper[7721]: I0216 02:06:43.633412 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9rl\" (UniqueName: \"kubernetes.io/projected/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-kube-api-access-5d9rl\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.633657 master-0 kubenswrapper[7721]: I0216 02:06:43.633539 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-config\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.633738 master-0 kubenswrapper[7721]: I0216 02:06:43.633637 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.633897 master-0 kubenswrapper[7721]: I0216 02:06:43.633815 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.633897 master-0 kubenswrapper[7721]: E0216 02:06:43.633837 7721 secret.go:189] Couldn't get secret 
openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 02:06:43.634023 master-0 kubenswrapper[7721]: E0216 02:06:43.633986 7721 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 02:06:43.634093 master-0 kubenswrapper[7721]: E0216 02:06:43.634049 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:44.133979365 +0000 UTC m=+7.628213667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : secret "serving-cert" not found Feb 16 02:06:43.634256 master-0 kubenswrapper[7721]: E0216 02:06:43.634211 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:44.134187541 +0000 UTC m=+7.628421803 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : configmap "client-ca" not found Feb 16 02:06:43.637486 master-0 kubenswrapper[7721]: I0216 02:06:43.637425 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-config\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.652811 master-0 kubenswrapper[7721]: I0216 02:06:43.652724 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d9rl\" (UniqueName: \"kubernetes.io/projected/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-kube-api-access-5d9rl\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:43.836224 master-0 kubenswrapper[7721]: I0216 02:06:43.836031 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 02:06:43.836224 master-0 kubenswrapper[7721]: I0216 02:06:43.836108 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 
02:06:43.836224 master-0 kubenswrapper[7721]: I0216 02:06:43.836139 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 02:06:43.836676 master-0 kubenswrapper[7721]: E0216 02:06:43.836362 7721 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 02:06:43.836676 master-0 kubenswrapper[7721]: E0216 02:06:43.836548 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.836509316 +0000 UTC m=+9.330743618 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : secret "serving-cert" not found Feb 16 02:06:43.836676 master-0 kubenswrapper[7721]: E0216 02:06:43.836674 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 02:06:43.836875 master-0 kubenswrapper[7721]: E0216 02:06:43.836760 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca podName:b460e889-753e-44c7-91a7-dc3f60cf4ad3 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.836736492 +0000 UTC m=+9.330970764 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca") pod "controller-manager-dc99ff586-t5hvw" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3") : configmap "client-ca" not found Feb 16 02:06:43.836943 master-0 kubenswrapper[7721]: I0216 02:06:43.836901 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 02:06:43.837739 master-0 kubenswrapper[7721]: I0216 02:06:43.837675 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 02:06:43.837996 master-0 kubenswrapper[7721]: I0216 02:06:43.837949 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config\") pod \"controller-manager-dc99ff586-t5hvw\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 02:06:43.898963 master-0 kubenswrapper[7721]: I0216 02:06:43.898882 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 02:06:43.907495 master-0 kubenswrapper[7721]: I0216 02:06:43.907406 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 02:06:44.043275 master-0 kubenswrapper[7721]: I0216 02:06:44.043170 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config\") pod \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " Feb 16 02:06:44.043521 master-0 kubenswrapper[7721]: I0216 02:06:44.043338 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd6jx\" (UniqueName: \"kubernetes.io/projected/b460e889-753e-44c7-91a7-dc3f60cf4ad3-kube-api-access-gd6jx\") pod \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " Feb 16 02:06:44.043521 master-0 kubenswrapper[7721]: I0216 02:06:44.043382 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles\") pod \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\" (UID: \"b460e889-753e-44c7-91a7-dc3f60cf4ad3\") " Feb 16 02:06:44.044815 master-0 kubenswrapper[7721]: I0216 02:06:44.044777 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b460e889-753e-44c7-91a7-dc3f60cf4ad3" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:06:44.045275 master-0 kubenswrapper[7721]: I0216 02:06:44.045232 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config" (OuterVolumeSpecName: "config") pod "b460e889-753e-44c7-91a7-dc3f60cf4ad3" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:06:44.051395 master-0 kubenswrapper[7721]: I0216 02:06:44.051349 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b460e889-753e-44c7-91a7-dc3f60cf4ad3-kube-api-access-gd6jx" (OuterVolumeSpecName: "kube-api-access-gd6jx") pod "b460e889-753e-44c7-91a7-dc3f60cf4ad3" (UID: "b460e889-753e-44c7-91a7-dc3f60cf4ad3"). InnerVolumeSpecName "kube-api-access-gd6jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:06:44.148125 master-0 kubenswrapper[7721]: I0216 02:06:44.148082 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:44.148249 master-0 kubenswrapper[7721]: I0216 02:06:44.148200 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:44.148417 master-0 kubenswrapper[7721]: I0216 02:06:44.148399 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:44.148496 master-0 kubenswrapper[7721]: I0216 02:06:44.148429 7721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" 
Feb 16 02:06:44.148496 master-0 kubenswrapper[7721]: I0216 02:06:44.148473 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd6jx\" (UniqueName: \"kubernetes.io/projected/b460e889-753e-44c7-91a7-dc3f60cf4ad3-kube-api-access-gd6jx\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:44.148584 master-0 kubenswrapper[7721]: E0216 02:06:44.148571 7721 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 02:06:44.148669 master-0 kubenswrapper[7721]: E0216 02:06:44.148643 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.148617981 +0000 UTC m=+8.642852283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : configmap "client-ca" not found Feb 16 02:06:44.149260 master-0 kubenswrapper[7721]: E0216 02:06:44.149239 7721 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 02:06:44.149322 master-0 kubenswrapper[7721]: E0216 02:06:44.149296 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.149280177 +0000 UTC m=+8.643514469 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : secret "serving-cert" not found Feb 16 02:06:44.316368 master-0 kubenswrapper[7721]: I0216 02:06:44.315673 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"] Feb 16 02:06:44.326493 master-0 kubenswrapper[7721]: I0216 02:06:44.326459 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-x6nhn"] Feb 16 02:06:44.908506 master-0 kubenswrapper[7721]: I0216 02:06:44.906756 7721 generic.go:334] "Generic (PLEG): container finished" podID="1f2d2601-481d-4e86-ac4c-3d34d5691261" containerID="b6ab399a4728a0f121aaa4c90c21fdc28f10d4d7510c12e5c86883bb65e07354" exitCode=0 Feb 16 02:06:44.908506 master-0 kubenswrapper[7721]: I0216 02:06:44.906820 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" event={"ID":"1f2d2601-481d-4e86-ac4c-3d34d5691261","Type":"ContainerDied","Data":"b6ab399a4728a0f121aaa4c90c21fdc28f10d4d7510c12e5c86883bb65e07354"} Feb 16 02:06:44.915755 master-0 kubenswrapper[7721]: I0216 02:06:44.915715 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" event={"ID":"9defdfff-eb18-4beb-9591-918d0e4b4236","Type":"ContainerStarted","Data":"ea913d4d2d0edfcbff7d836320baff12a198f69ef86939ba8c7d3ee238eec033"} Feb 16 02:06:44.915755 master-0 kubenswrapper[7721]: I0216 02:06:44.915756 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" event={"ID":"9defdfff-eb18-4beb-9591-918d0e4b4236","Type":"ContainerStarted","Data":"93c4687f8a629173f2b94639e83fbe20ff1ad3e33cea55c6d4a8747fb84f23bd"} Feb 16 02:06:44.918347 master-0 
kubenswrapper[7721]: I0216 02:06:44.918310 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerStarted","Data":"df7d86f073bbfdceeb06be9efc451e7abb0405476c53f59a41a6fb24d7d9750e"} Feb 16 02:06:44.918804 master-0 kubenswrapper[7721]: I0216 02:06:44.918752 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:44.931456 master-0 kubenswrapper[7721]: I0216 02:06:44.926405 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9bnql" event={"ID":"2a67f799-fd8d-4bee-9d67-720151c1650b","Type":"ContainerStarted","Data":"0ea4258c33f51bc9d6b34437cd6d43a195da32b1f09112c54765848f3aa9f36e"} Feb 16 02:06:44.947754 master-0 kubenswrapper[7721]: I0216 02:06:44.944410 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-t5hvw" Feb 16 02:06:44.947754 master-0 kubenswrapper[7721]: I0216 02:06:44.944731 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj" event={"ID":"676adb95-3ffd-43e5-89e3-9d7a7d74df28","Type":"ContainerStarted","Data":"92905ae35545e079e87d8908f39688e6a1a52d219de95b9b7367aad88b020b28"} Feb 16 02:06:44.965481 master-0 kubenswrapper[7721]: I0216 02:06:44.965301 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" podStartSLOduration=1.9652716479999999 podStartE2EDuration="1.965271648s" podCreationTimestamp="2026-02-16 02:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:06:44.963937715 +0000 UTC m=+8.458171977" watchObservedRunningTime="2026-02-16 02:06:44.965271648 +0000 UTC m=+8.459505950" Feb 16 02:06:45.075127 master-0 kubenswrapper[7721]: I0216 02:06:45.074973 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-t5hvw"] Feb 16 02:06:45.084452 master-0 kubenswrapper[7721]: I0216 02:06:45.084370 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-t5hvw"] Feb 16 02:06:45.120347 master-0 kubenswrapper[7721]: I0216 02:06:45.120286 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"] Feb 16 02:06:45.121027 master-0 kubenswrapper[7721]: I0216 02:06:45.120997 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.124941 master-0 kubenswrapper[7721]: I0216 02:06:45.124894 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 02:06:45.125114 master-0 kubenswrapper[7721]: I0216 02:06:45.125085 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 02:06:45.126047 master-0 kubenswrapper[7721]: I0216 02:06:45.126017 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 02:06:45.126193 master-0 kubenswrapper[7721]: I0216 02:06:45.126166 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 02:06:45.126637 master-0 kubenswrapper[7721]: I0216 02:06:45.126617 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 02:06:45.147516 master-0 kubenswrapper[7721]: I0216 02:06:45.147453 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"] Feb 16 02:06:45.148871 master-0 kubenswrapper[7721]: I0216 02:06:45.148830 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 02:06:45.171488 master-0 kubenswrapper[7721]: I0216 02:06:45.167858 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:45.171488 master-0 kubenswrapper[7721]: I0216 02:06:45.167912 7721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:45.171488 master-0 kubenswrapper[7721]: I0216 02:06:45.167994 7721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b460e889-753e-44c7-91a7-dc3f60cf4ad3-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:45.171488 master-0 kubenswrapper[7721]: I0216 02:06:45.168007 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b460e889-753e-44c7-91a7-dc3f60cf4ad3-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:06:45.171488 master-0 kubenswrapper[7721]: E0216 02:06:45.168067 7721 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 02:06:45.171488 master-0 kubenswrapper[7721]: E0216 02:06:45.168109 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:47.168094666 +0000 UTC m=+10.662328928 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : configmap "client-ca" not found Feb 16 02:06:45.171488 master-0 kubenswrapper[7721]: E0216 02:06:45.168424 7721 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 02:06:45.171488 master-0 kubenswrapper[7721]: E0216 02:06:45.168480 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:47.168472126 +0000 UTC m=+10.662706388 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : secret "serving-cert" not found Feb 16 02:06:45.268971 master-0 kubenswrapper[7721]: I0216 02:06:45.268874 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.269345 master-0 kubenswrapper[7721]: I0216 02:06:45.269094 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " 
pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.269345 master-0 kubenswrapper[7721]: I0216 02:06:45.269178 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8b7\" (UniqueName: \"kubernetes.io/projected/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-kube-api-access-lk8b7\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.269345 master-0 kubenswrapper[7721]: I0216 02:06:45.269213 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-config\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.269345 master-0 kubenswrapper[7721]: I0216 02:06:45.269321 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-proxy-ca-bundles\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.370627 master-0 kubenswrapper[7721]: I0216 02:06:45.370431 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.370627 master-0 kubenswrapper[7721]: I0216 02:06:45.370587 7721 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lk8b7\" (UniqueName: \"kubernetes.io/projected/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-kube-api-access-lk8b7\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.370627 master-0 kubenswrapper[7721]: I0216 02:06:45.370629 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-config\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.371078 master-0 kubenswrapper[7721]: I0216 02:06:45.370718 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-proxy-ca-bundles\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.371078 master-0 kubenswrapper[7721]: I0216 02:06:45.370788 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.371078 master-0 kubenswrapper[7721]: E0216 02:06:45.370976 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 02:06:45.371078 master-0 kubenswrapper[7721]: E0216 02:06:45.371055 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca 
podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.871027937 +0000 UTC m=+9.365262229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : configmap "client-ca" not found Feb 16 02:06:45.371409 master-0 kubenswrapper[7721]: E0216 02:06:45.371156 7721 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 02:06:45.371409 master-0 kubenswrapper[7721]: E0216 02:06:45.371195 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:06:45.871181651 +0000 UTC m=+9.365415953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : secret "serving-cert" not found Feb 16 02:06:45.374280 master-0 kubenswrapper[7721]: I0216 02:06:45.374230 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-config\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.376081 master-0 kubenswrapper[7721]: I0216 02:06:45.376022 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-proxy-ca-bundles\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.427824 master-0 kubenswrapper[7721]: I0216 02:06:45.427529 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8b7\" (UniqueName: \"kubernetes.io/projected/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-kube-api-access-lk8b7\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.471601 master-0 kubenswrapper[7721]: I0216 02:06:45.471553 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:45.471601 
master-0 kubenswrapper[7721]: I0216 02:06:45.471598 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:45.471601 master-0 kubenswrapper[7721]: I0216 02:06:45.471635 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:45.472000 master-0 kubenswrapper[7721]: I0216 02:06:45.471653 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:45.472000 master-0 kubenswrapper[7721]: I0216 02:06:45.471691 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:45.472000 master-0 kubenswrapper[7721]: I0216 02:06:45.471710 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod 
\"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:45.472000 master-0 kubenswrapper[7721]: I0216 02:06:45.471745 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:45.472000 master-0 kubenswrapper[7721]: E0216 02:06:45.471867 7721 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:45.472000 master-0 kubenswrapper[7721]: E0216 02:06:45.471910 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert podName:864c0ef4-319c-457c-aa3b-adf0c3e5a0ff nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.471895973 +0000 UTC m=+16.966130225 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert") pod "cluster-version-operator-76959b6567-9fxxl" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff") : secret "cluster-version-operator-serving-cert" not found Feb 16 02:06:45.472373 master-0 kubenswrapper[7721]: E0216 02:06:45.472214 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:45.472373 master-0 kubenswrapper[7721]: E0216 02:06:45.472236 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.472229811 +0000 UTC m=+16.966464063 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:45.472373 master-0 kubenswrapper[7721]: E0216 02:06:45.472266 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 02:06:45.472373 master-0 kubenswrapper[7721]: E0216 02:06:45.472283 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.472278293 +0000 UTC m=+16.966512555 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "node-tuning-operator-tls" not found Feb 16 02:06:45.472373 master-0 kubenswrapper[7721]: E0216 02:06:45.472313 7721 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:45.472373 master-0 kubenswrapper[7721]: E0216 02:06:45.472327 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls podName:2ffa4db8-97da-42de-8e51-35680f518ca7 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.472322994 +0000 UTC m=+16.966557256 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls") pod "dns-operator-86b8869b79-4rfwq" (UID: "2ffa4db8-97da-42de-8e51-35680f518ca7") : secret "metrics-tls" not found Feb 16 02:06:45.472373 master-0 kubenswrapper[7721]: E0216 02:06:45.472355 7721 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:45.472373 master-0 kubenswrapper[7721]: E0216 02:06:45.472371 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert podName:b2a83ddd-ffa5-4127-9099-91187ad9dbba nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.472366235 +0000 UTC m=+16.966600497 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-845gn" (UID: "b2a83ddd-ffa5-4127-9099-91187ad9dbba") : secret "performance-addon-operator-webhook-cert" not found Feb 16 02:06:45.472925 master-0 kubenswrapper[7721]: E0216 02:06:45.472399 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 02:06:45.472925 master-0 kubenswrapper[7721]: E0216 02:06:45.472414 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.472409706 +0000 UTC m=+16.966643968 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:45.472925 master-0 kubenswrapper[7721]: E0216 02:06:45.472459 7721 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 02:06:45.472925 master-0 kubenswrapper[7721]: E0216 02:06:45.472476 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls podName:04804a08-e3a5-46f3-abcb-967866834baa nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.472470787 +0000 UTC m=+16.966705049 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls") pod "ingress-operator-c588d8cb4-nbjz6" (UID: "04804a08-e3a5-46f3-abcb-967866834baa") : secret "metrics-tls" not found Feb 16 02:06:45.572877 master-0 kubenswrapper[7721]: I0216 02:06:45.572809 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:45.573092 master-0 kubenswrapper[7721]: I0216 02:06:45.572883 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:45.573092 master-0 kubenswrapper[7721]: I0216 02:06:45.572963 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:45.573092 master-0 kubenswrapper[7721]: I0216 02:06:45.572999 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:45.573092 master-0 kubenswrapper[7721]: I0216 02:06:45.573065 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:45.573351 master-0 kubenswrapper[7721]: I0216 02:06:45.573105 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:45.573351 master-0 kubenswrapper[7721]: E0216 02:06:45.573278 7721 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:45.573351 master-0 kubenswrapper[7721]: E0216 02:06:45.573349 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.573327013 +0000 UTC m=+17.067561305 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:45.573953 master-0 kubenswrapper[7721]: E0216 02:06:45.573911 7721 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 02:06:45.574084 master-0 kubenswrapper[7721]: E0216 02:06:45.573995 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.573968499 +0000 UTC m=+17.068202801 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found Feb 16 02:06:45.574084 master-0 kubenswrapper[7721]: E0216 02:06:45.574074 7721 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:45.574244 master-0 kubenswrapper[7721]: E0216 02:06:45.574114 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.574100942 +0000 UTC m=+17.068335244 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:45.574244 master-0 kubenswrapper[7721]: E0216 02:06:45.574181 7721 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 02:06:45.574244 master-0 kubenswrapper[7721]: E0216 02:06:45.574214 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.574202785 +0000 UTC m=+17.068437077 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : secret "metrics-daemon-secret" not found Feb 16 02:06:45.574523 master-0 kubenswrapper[7721]: E0216 02:06:45.574275 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 02:06:45.574523 master-0 kubenswrapper[7721]: E0216 02:06:45.574310 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.574298777 +0000 UTC m=+17.068533069 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found Feb 16 02:06:45.574523 master-0 kubenswrapper[7721]: E0216 02:06:45.574387 7721 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 02:06:45.574523 master-0 kubenswrapper[7721]: E0216 02:06:45.574422 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls podName:a0540a70-a256-422b-a827-e564d0e67866 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.57441118 +0000 UTC m=+17.068645472 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-bxgpd" (UID: "a0540a70-a256-422b-a827-e564d0e67866") : secret "image-registry-operator-tls" not found Feb 16 02:06:45.883459 master-0 kubenswrapper[7721]: I0216 02:06:45.883037 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.883459 master-0 kubenswrapper[7721]: I0216 02:06:45.883157 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca\") pod 
\"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:45.883459 master-0 kubenswrapper[7721]: E0216 02:06:45.883297 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 02:06:45.883459 master-0 kubenswrapper[7721]: E0216 02:06:45.883344 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:06:46.883330034 +0000 UTC m=+10.377564296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : configmap "client-ca" not found Feb 16 02:06:45.883748 master-0 kubenswrapper[7721]: E0216 02:06:45.883521 7721 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 02:06:45.883748 master-0 kubenswrapper[7721]: E0216 02:06:45.883558 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:06:46.88354981 +0000 UTC m=+10.377784072 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : secret "serving-cert" not found
Feb 16 02:06:46.734590 master-0 kubenswrapper[7721]: I0216 02:06:46.734497 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b460e889-753e-44c7-91a7-dc3f60cf4ad3" path="/var/lib/kubelet/pods/b460e889-753e-44c7-91a7-dc3f60cf4ad3/volumes"
Feb 16 02:06:46.899397 master-0 kubenswrapper[7721]: I0216 02:06:46.898583 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"
Feb 16 02:06:46.899397 master-0 kubenswrapper[7721]: E0216 02:06:46.898899 7721 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 02:06:46.899397 master-0 kubenswrapper[7721]: E0216 02:06:46.899054 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:06:48.899020285 +0000 UTC m=+12.393254737 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : secret "serving-cert" not found
Feb 16 02:06:46.899397 master-0 kubenswrapper[7721]: I0216 02:06:46.899298 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"
Feb 16 02:06:46.900210 master-0 kubenswrapper[7721]: E0216 02:06:46.899546 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 02:06:46.900210 master-0 kubenswrapper[7721]: E0216 02:06:46.899676 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:06:48.89964841 +0000 UTC m=+12.393882712 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : configmap "client-ca" not found
Feb 16 02:06:46.962975 master-0 kubenswrapper[7721]: I0216 02:06:46.962886 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj" event={"ID":"676adb95-3ffd-43e5-89e3-9d7a7d74df28","Type":"ContainerStarted","Data":"a7021ba955029794378d56cb0c3dec02aae2a156dd1401b9712938f44b3a7024"}
Feb 16 02:06:46.962975 master-0 kubenswrapper[7721]: I0216 02:06:46.962955 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj" event={"ID":"676adb95-3ffd-43e5-89e3-9d7a7d74df28","Type":"ContainerStarted","Data":"206971d07babe77f50887cc9e2c80647268c9526d19646429e62ad251c58989c"}
Feb 16 02:06:46.979090 master-0 kubenswrapper[7721]: I0216 02:06:46.979031 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:06:46.989511 master-0 kubenswrapper[7721]: I0216 02:06:46.989379 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj" podStartSLOduration=3.056449552 podStartE2EDuration="4.989349348s" podCreationTimestamp="2026-02-16 02:06:42 +0000 UTC" firstStartedPulling="2026-02-16 02:06:44.360603837 +0000 UTC m=+7.854838099" lastFinishedPulling="2026-02-16 02:06:46.293503633 +0000 UTC m=+9.787737895" observedRunningTime="2026-02-16 02:06:46.984607399 +0000 UTC m=+10.478841701" watchObservedRunningTime="2026-02-16 02:06:46.989349348 +0000 UTC m=+10.483583640"
Feb 16 02:06:47.203214 master-0 kubenswrapper[7721]: I0216 02:06:47.202748 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"
Feb 16 02:06:47.203214 master-0 kubenswrapper[7721]: E0216 02:06:47.202956 7721 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 02:06:47.203581 master-0 kubenswrapper[7721]: I0216 02:06:47.203256 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"
Feb 16 02:06:47.203581 master-0 kubenswrapper[7721]: E0216 02:06:47.203307 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:51.203281443 +0000 UTC m=+14.697515955 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : secret "serving-cert" not found
Feb 16 02:06:47.203581 master-0 kubenswrapper[7721]: E0216 02:06:47.203375 7721 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 02:06:47.203581 master-0 kubenswrapper[7721]: E0216 02:06:47.203481 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:51.203428487 +0000 UTC m=+14.697662779 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : configmap "client-ca" not found
Feb 16 02:06:47.621592 master-0 kubenswrapper[7721]: I0216 02:06:47.621421 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:47.621811 master-0 kubenswrapper[7721]: I0216 02:06:47.621618 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:06:47.621811 master-0 kubenswrapper[7721]: I0216 02:06:47.621632 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:06:47.706364 master-0 kubenswrapper[7721]: I0216 02:06:47.706267 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:06:47.968553 master-0 kubenswrapper[7721]: I0216 02:06:47.968265 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:06:47.992934
master-0 kubenswrapper[7721]: I0216 02:06:47.992769 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:48.934148 master-0 kubenswrapper[7721]: I0216 02:06:48.934047 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"
Feb 16 02:06:48.934421 master-0 kubenswrapper[7721]: E0216 02:06:48.934333 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 02:06:48.934421 master-0 kubenswrapper[7721]: I0216 02:06:48.934394 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"
Feb 16 02:06:48.934634 master-0 kubenswrapper[7721]: E0216 02:06:48.934517 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:06:52.934430288 +0000 UTC m=+16.428664760 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : configmap "client-ca" not found
Feb 16 02:06:48.944700 master-0 kubenswrapper[7721]: I0216 02:06:48.944327 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"
Feb 16 02:06:48.975785 master-0 kubenswrapper[7721]: I0216 02:06:48.975725 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" event={"ID":"1f2d2601-481d-4e86-ac4c-3d34d5691261","Type":"ContainerStarted","Data":"cb69fe6420f408760369adf6194d1b660de997c81bbd72a933db5281cb306f1b"}
Feb 16 02:06:49.982827 master-0 kubenswrapper[7721]: I0216 02:06:49.982759 7721 generic.go:334] "Generic (PLEG): container finished" podID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerID="df7d86f073bbfdceeb06be9efc451e7abb0405476c53f59a41a6fb24d7d9750e" exitCode=0
Feb 16 02:06:49.982827 master-0 kubenswrapper[7721]: I0216 02:06:49.982805 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerDied","Data":"df7d86f073bbfdceeb06be9efc451e7abb0405476c53f59a41a6fb24d7d9750e"}
Feb 16 02:06:49.983868 master-0 kubenswrapper[7721]: I0216 02:06:49.983135 7721 scope.go:117] "RemoveContainer" containerID="df7d86f073bbfdceeb06be9efc451e7abb0405476c53f59a41a6fb24d7d9750e"
Feb 16 02:06:50.977998 master-0 kubenswrapper[7721]: I0216 02:06:50.977922 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:50.991226 master-0 kubenswrapper[7721]: I0216 02:06:50.991083 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerStarted","Data":"bd11ac1d6de053ea502aba0e630fef47dce8c71db2927be44e676bacf23e8754"}
Feb 16 02:06:50.992092 master-0 kubenswrapper[7721]: I0216 02:06:50.991746 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:06:51.270488 master-0 kubenswrapper[7721]: I0216 02:06:51.270331 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"
Feb 16 02:06:51.270488 master-0 kubenswrapper[7721]: I0216 02:06:51.270462 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"
Feb 16 02:06:51.270760 master-0 kubenswrapper[7721]: E0216 02:06:51.270513 7721 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 02:06:51.270760 master-0 kubenswrapper[7721]: E0216 02:06:51.270567 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}"
failed. No retries permitted until 2026-02-16 02:06:59.2705521 +0000 UTC m=+22.764786362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : secret "serving-cert" not found
Feb 16 02:06:51.270760 master-0 kubenswrapper[7721]: E0216 02:06:51.270615 7721 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 02:06:51.270760 master-0 kubenswrapper[7721]: E0216 02:06:51.270668 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:06:59.270651242 +0000 UTC m=+22.764885544 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : configmap "client-ca" not found
Feb 16 02:06:51.998519 master-0 kubenswrapper[7721]: I0216 02:06:51.998081 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" event={"ID":"a8f33151-61df-4b66-ba85-9ba210779059","Type":"ContainerStarted","Data":"8cdb2cf816b95ba9c46ea2bd0950b6c6b1a6f09cea50132c976d896bf508decf"}
Feb 16 02:06:52.639805 master-0 kubenswrapper[7721]: I0216 02:06:52.638833 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-996fc886d-zb7x5"]
Feb 16 02:06:52.639805 master-0 kubenswrapper[7721]: I0216 02:06:52.639472 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.641533 master-0 kubenswrapper[7721]: I0216 02:06:52.641331 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-996fc886d-zb7x5"]
Feb 16 02:06:52.650988 master-0 kubenswrapper[7721]: I0216 02:06:52.650933 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 02:06:52.651318 master-0 kubenswrapper[7721]: I0216 02:06:52.651018 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 02:06:52.651318 master-0 kubenswrapper[7721]: I0216 02:06:52.651110 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 02:06:52.651318 master-0 kubenswrapper[7721]: I0216 02:06:52.651312 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 02:06:52.651623 master-0 kubenswrapper[7721]: I0216 02:06:52.651360 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 02:06:52.651623 master-0 kubenswrapper[7721]: I0216 02:06:52.651585 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Feb 16 02:06:52.651804 master-0 kubenswrapper[7721]: I0216 02:06:52.651692 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 02:06:52.652008 master-0 kubenswrapper[7721]: I0216 02:06:52.651885 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 02:06:52.665770 master-0 kubenswrapper[7721]: I0216 02:06:52.665665 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Feb 16 02:06:52.671319 master-0 kubenswrapper[7721]: I0216 02:06:52.671087 7721
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 02:06:52.696936 master-0 kubenswrapper[7721]: I0216 02:06:52.696851 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697205 master-0 kubenswrapper[7721]: I0216 02:06:52.696944 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit-dir\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697205 master-0 kubenswrapper[7721]: I0216 02:06:52.697004 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-trusted-ca-bundle\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697205 master-0 kubenswrapper[7721]: I0216 02:06:52.697174 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbpc4\" (UniqueName: \"kubernetes.io/projected/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-kube-api-access-rbpc4\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697592 master-0 kubenswrapper[7721]: I0216 02:06:52.697258 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\"
(UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697592 master-0 kubenswrapper[7721]: I0216 02:06:52.697313 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697592 master-0 kubenswrapper[7721]: I0216 02:06:52.697415 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-image-import-ca\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697592 master-0 kubenswrapper[7721]: I0216 02:06:52.697507 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697592 master-0 kubenswrapper[7721]: I0216 02:06:52.697566 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-config\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.697592 master-0 kubenswrapper[7721]: I0216 02:06:52.697590 7721
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-encryption-config\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.699086 master-0 kubenswrapper[7721]: I0216 02:06:52.697698 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-node-pullsecrets\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.799093 master-0 kubenswrapper[7721]: I0216 02:06:52.799024 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.799093 master-0 kubenswrapper[7721]: I0216 02:06:52.799112 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-config\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.799523 master-0 kubenswrapper[7721]: E0216 02:06:52.799469 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found
Feb 16 02:06:52.799907 master-0 kubenswrapper[7721]: I0216 02:06:52.799501 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName:
\"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-encryption-config\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.799907 master-0 kubenswrapper[7721]: E0216 02:06:52.799669 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.299623965 +0000 UTC m=+16.793858267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : configmap "etcd-serving-ca" not found
Feb 16 02:06:52.800044 master-0 kubenswrapper[7721]: I0216 02:06:52.799895 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-node-pullsecrets\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.800044 master-0 kubenswrapper[7721]: I0216 02:06:52.800029 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800106 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-config\") pod \"apiserver-996fc886d-zb7x5\" (UID:
\"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800190 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-node-pullsecrets\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800277 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit-dir\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800354 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-trusted-ca-bundle\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: E0216 02:06:52.800389 7721 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800416 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbpc4\" (UniqueName: \"kubernetes.io/projected/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-kube-api-access-rbpc4\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800470 7721
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit-dir\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: E0216 02:06:52.800504 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.300484927 +0000 UTC m=+16.794719229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : secret "etcd-client" not found
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800560 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800626 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: E0216 02:06:52.800675 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: E0216 02:06:52.800766 7721
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.300738263 +0000 UTC m=+16.794972525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : configmap "audit-0" not found
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.800811 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-image-import-ca\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: E0216 02:06:52.800841 7721 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: E0216 02:06:52.800900 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:53.300885027 +0000 UTC m=+16.795119319 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : secret "serving-cert" not found
Feb 16 02:06:52.801187 master-0 kubenswrapper[7721]: I0216 02:06:52.801216 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-trusted-ca-bundle\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.802713 master-0 kubenswrapper[7721]: I0216 02:06:52.802493 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-image-import-ca\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.809527 master-0 kubenswrapper[7721]: I0216 02:06:52.808292 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-encryption-config\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:52.820348 master-0 kubenswrapper[7721]: I0216 02:06:52.820292 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbpc4\" (UniqueName: \"kubernetes.io/projected/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-kube-api-access-rbpc4\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:53.003361 master-0 kubenswrapper[7721]: I0216 02:06:53.003280 7721 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:06:53.004557 master-0 kubenswrapper[7721]: E0216 02:06:53.003564 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 02:06:53.004557 master-0 kubenswrapper[7721]: E0216 02:06:53.003645 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:07:01.003623863 +0000 UTC m=+24.497858165 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : configmap "client-ca" not found Feb 16 02:06:53.306145 master-0 kubenswrapper[7721]: I0216 02:06:53.306083 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5" Feb 16 02:06:53.306405 master-0 kubenswrapper[7721]: E0216 02:06:53.306350 7721 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Feb 16 02:06:53.306504 master-0 kubenswrapper[7721]: E0216 02:06:53.306484 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" 
failed. No retries permitted until 2026-02-16 02:06:54.306455086 +0000 UTC m=+17.800689398 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : secret "etcd-client" not found Feb 16 02:06:53.306504 master-0 kubenswrapper[7721]: I0216 02:06:53.306471 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5" Feb 16 02:06:53.306620 master-0 kubenswrapper[7721]: I0216 02:06:53.306541 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5" Feb 16 02:06:53.306620 master-0 kubenswrapper[7721]: E0216 02:06:53.306563 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 02:06:53.306712 master-0 kubenswrapper[7721]: E0216 02:06:53.306637 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:54.306613259 +0000 UTC m=+17.800847561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : configmap "audit-0" not found Feb 16 02:06:53.306712 master-0 kubenswrapper[7721]: E0216 02:06:53.306706 7721 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 02:06:53.306807 master-0 kubenswrapper[7721]: E0216 02:06:53.306760 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:54.306745833 +0000 UTC m=+17.800980115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : secret "serving-cert" not found Feb 16 02:06:53.306807 master-0 kubenswrapper[7721]: I0216 02:06:53.306756 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5" Feb 16 02:06:53.306891 master-0 kubenswrapper[7721]: E0216 02:06:53.306842 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found Feb 16 02:06:53.306941 master-0 kubenswrapper[7721]: E0216 02:06:53.306902 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. 
No retries permitted until 2026-02-16 02:06:54.306886826 +0000 UTC m=+17.801121128 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : configmap "etcd-serving-ca" not found Feb 16 02:06:53.516102 master-0 kubenswrapper[7721]: I0216 02:06:53.515728 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:06:53.516223 master-0 kubenswrapper[7721]: I0216 02:06:53.516123 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:53.516223 master-0 kubenswrapper[7721]: I0216 02:06:53.516210 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:53.516311 master-0 kubenswrapper[7721]: I0216 02:06:53.516244 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod 
\"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:53.516356 master-0 kubenswrapper[7721]: I0216 02:06:53.516338 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:06:53.516400 master-0 kubenswrapper[7721]: I0216 02:06:53.516376 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:53.516529 master-0 kubenswrapper[7721]: I0216 02:06:53.516496 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:53.517082 master-0 kubenswrapper[7721]: E0216 02:06:53.517023 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 02:06:53.517246 master-0 kubenswrapper[7721]: E0216 02:06:53.517209 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert podName:467d92a2-1cf3-418d-b41e-8e5f9d7a5b74 nodeName:}" failed. 
No retries permitted until 2026-02-16 02:07:09.517139539 +0000 UTC m=+33.011373841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert") pod "olm-operator-6b56bd877c-qwp9g" (UID: "467d92a2-1cf3-418d-b41e-8e5f9d7a5b74") : secret "olm-operator-serving-cert" not found Feb 16 02:06:53.517852 master-0 kubenswrapper[7721]: E0216 02:06:53.517813 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 02:06:53.517944 master-0 kubenswrapper[7721]: E0216 02:06:53.517894 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert podName:76915cba-7c11-4bd8-9943-81de74e7781b nodeName:}" failed. No retries permitted until 2026-02-16 02:07:09.517870737 +0000 UTC m=+33.012105029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert") pod "catalog-operator-588944557d-2z8fq" (UID: "76915cba-7c11-4bd8-9943-81de74e7781b") : secret "catalog-operator-serving-cert" not found Feb 16 02:06:53.522014 master-0 kubenswrapper[7721]: I0216 02:06:53.521950 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:53.522210 master-0 kubenswrapper[7721]: I0216 02:06:53.522167 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: 
\"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:53.522283 master-0 kubenswrapper[7721]: I0216 02:06:53.522187 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:53.522884 master-0 kubenswrapper[7721]: I0216 02:06:53.522820 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:53.529316 master-0 kubenswrapper[7721]: I0216 02:06:53.529261 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"cluster-version-operator-76959b6567-9fxxl\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:53.572632 master-0 kubenswrapper[7721]: I0216 02:06:53.572487 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:06:53.576544 master-0 kubenswrapper[7721]: I0216 02:06:53.576505 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:06:53.582766 master-0 kubenswrapper[7721]: I0216 02:06:53.582547 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:06:53.584297 master-0 kubenswrapper[7721]: I0216 02:06:53.582929 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:06:53.618031 master-0 kubenswrapper[7721]: I0216 02:06:53.617974 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:06:53.618103 master-0 kubenswrapper[7721]: I0216 02:06:53.618052 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:06:53.618315 master-0 kubenswrapper[7721]: I0216 02:06:53.618137 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:06:53.618315 master-0 kubenswrapper[7721]: I0216 02:06:53.618191 7721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:06:53.618315 master-0 kubenswrapper[7721]: I0216 02:06:53.618254 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:53.618315 master-0 kubenswrapper[7721]: I0216 02:06:53.618293 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:06:53.618722 master-0 kubenswrapper[7721]: E0216 02:06:53.618656 7721 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 02:06:53.618802 master-0 kubenswrapper[7721]: E0216 02:06:53.618777 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics podName:bde83629-b39c-401e-bc30-5ce205638918 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:09.618750583 +0000 UTC m=+33.112984865 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-8nl7s" (UID: "bde83629-b39c-401e-bc30-5ce205638918") : secret "marketplace-operator-metrics" not found Feb 16 02:06:53.619273 master-0 kubenswrapper[7721]: E0216 02:06:53.619248 7721 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 02:06:53.619325 master-0 kubenswrapper[7721]: E0216 02:06:53.619306 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs podName:b6088119-1125-4271-8c0b-0675e700edd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:09.619288577 +0000 UTC m=+33.113522859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs") pod "multus-admission-controller-7c64d55f8-62wr2" (UID: "b6088119-1125-4271-8c0b-0675e700edd9") : secret "multus-admission-controller-secret" not found Feb 16 02:06:53.619397 master-0 kubenswrapper[7721]: E0216 02:06:53.619367 7721 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:53.619445 master-0 kubenswrapper[7721]: E0216 02:06:53.619410 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls podName:21686a6d-f685-4fb6-98af-3e8a39c5981b nodeName:}" failed. No retries permitted until 2026-02-16 02:07:09.619397869 +0000 UTC m=+33.113632151 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-5q4zs" (UID: "21686a6d-f685-4fb6-98af-3e8a39c5981b") : secret "cluster-monitoring-operator-tls" not found Feb 16 02:06:53.619519 master-0 kubenswrapper[7721]: E0216 02:06:53.619493 7721 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 02:06:53.619560 master-0 kubenswrapper[7721]: E0216 02:06:53.619538 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs podName:7f0f9b7d-e663-4927-861b-a9544d483b6e nodeName:}" failed. No retries permitted until 2026-02-16 02:07:09.619526153 +0000 UTC m=+33.113760425 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs") pod "network-metrics-daemon-gn9mv" (UID: "7f0f9b7d-e663-4927-861b-a9544d483b6e") : secret "metrics-daemon-secret" not found Feb 16 02:06:53.619624 master-0 kubenswrapper[7721]: E0216 02:06:53.619595 7721 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 02:06:53.619665 master-0 kubenswrapper[7721]: E0216 02:06:53.619635 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert podName:23755f7f-dce6-4dcf-9664-22e3aedb5c81 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:09.619623605 +0000 UTC m=+33.113857887 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-tkqng" (UID: "23755f7f-dce6-4dcf-9664-22e3aedb5c81") : secret "package-server-manager-serving-cert" not found Feb 16 02:06:53.637080 master-0 kubenswrapper[7721]: I0216 02:06:53.636920 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:53.886926 master-0 kubenswrapper[7721]: I0216 02:06:53.886397 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:06:53.966449 master-0 kubenswrapper[7721]: I0216 02:06:53.966367 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-4rfwq"] Feb 16 02:06:53.980880 master-0 kubenswrapper[7721]: W0216 02:06:53.980811 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ffa4db8_97da_42de_8e51_35680f518ca7.slice/crio-4e9f20e89ac525e352545c86571266e96559d8a418a9a58ef9e04f14e5bd7411 WatchSource:0}: Error finding container 4e9f20e89ac525e352545c86571266e96559d8a418a9a58ef9e04f14e5bd7411: Status 404 returned error can't find the container with id 4e9f20e89ac525e352545c86571266e96559d8a418a9a58ef9e04f14e5bd7411 Feb 16 02:06:53.984669 master-0 kubenswrapper[7721]: I0216 02:06:53.984635 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:06:53.990716 master-0 kubenswrapper[7721]: I0216 02:06:53.990669 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"] Feb 16 02:06:54.024282 master-0 kubenswrapper[7721]: I0216 02:06:54.014205 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" event={"ID":"b2a83ddd-ffa5-4127-9099-91187ad9dbba","Type":"ContainerStarted","Data":"4134014c45e6845c874e6a32e463bf4a0bdd4d27746b06893f36026417f8e6db"} Feb 16 02:06:54.024282 master-0 kubenswrapper[7721]: I0216 02:06:54.016978 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" event={"ID":"724ac845-3835-458b-9645-e665be135ff9","Type":"ContainerStarted","Data":"7d8525382e7c303df250ff37074c2b59dae064f1c16fab17985b8492c29587df"} Feb 16 02:06:54.024282 master-0 kubenswrapper[7721]: I0216 02:06:54.019144 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" event={"ID":"2ffa4db8-97da-42de-8e51-35680f518ca7","Type":"ContainerStarted","Data":"4e9f20e89ac525e352545c86571266e96559d8a418a9a58ef9e04f14e5bd7411"} Feb 16 02:06:54.024282 master-0 kubenswrapper[7721]: I0216 02:06:54.020341 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"] Feb 16 02:06:54.024282 master-0 kubenswrapper[7721]: I0216 02:06:54.021767 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" event={"ID":"d008dbd4-e713-4f2e-b64d-ca9cfc83a502","Type":"ContainerStarted","Data":"8d10d5c80bfcb2b99d3038fe840daed1c6f8b22f08ebd048407ce8281cbb2534"} Feb 16 02:06:54.028572 master-0 kubenswrapper[7721]: I0216 02:06:54.028398 7721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" event={"ID":"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff","Type":"ContainerStarted","Data":"3882e37ffb30b9304e7e790d6887d03af1856850530011753433622539e4cab4"} Feb 16 02:06:54.040134 master-0 kubenswrapper[7721]: I0216 02:06:54.039603 7721 generic.go:334] "Generic (PLEG): container finished" podID="1f2d2601-481d-4e86-ac4c-3d34d5691261" containerID="cb69fe6420f408760369adf6194d1b660de997c81bbd72a933db5281cb306f1b" exitCode=0 Feb 16 02:06:54.040134 master-0 kubenswrapper[7721]: I0216 02:06:54.039676 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" event={"ID":"1f2d2601-481d-4e86-ac4c-3d34d5691261","Type":"ContainerDied","Data":"cb69fe6420f408760369adf6194d1b660de997c81bbd72a933db5281cb306f1b"} Feb 16 02:06:54.041277 master-0 kubenswrapper[7721]: I0216 02:06:54.041209 7721 scope.go:117] "RemoveContainer" containerID="cb69fe6420f408760369adf6194d1b660de997c81bbd72a933db5281cb306f1b" Feb 16 02:06:54.111312 master-0 kubenswrapper[7721]: I0216 02:06:54.111261 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"] Feb 16 02:06:54.330294 master-0 kubenswrapper[7721]: I0216 02:06:54.329959 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5" Feb 16 02:06:54.330294 master-0 kubenswrapper[7721]: I0216 02:06:54.330016 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert\") pod \"apiserver-996fc886d-zb7x5\" (UID: 
\"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5" Feb 16 02:06:54.330294 master-0 kubenswrapper[7721]: I0216 02:06:54.330093 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5" Feb 16 02:06:54.330294 master-0 kubenswrapper[7721]: I0216 02:06:54.330169 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5" Feb 16 02:06:54.330294 master-0 kubenswrapper[7721]: E0216 02:06:54.330299 7721 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Feb 16 02:06:54.330760 master-0 kubenswrapper[7721]: E0216 02:06:54.330351 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:56.33033581 +0000 UTC m=+19.824570092 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : secret "etcd-client" not found Feb 16 02:06:54.330760 master-0 kubenswrapper[7721]: E0216 02:06:54.330749 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 02:06:54.330836 master-0 kubenswrapper[7721]: E0216 02:06:54.330784 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:56.330773601 +0000 UTC m=+19.825007873 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : configmap "audit-0" not found Feb 16 02:06:54.330873 master-0 kubenswrapper[7721]: E0216 02:06:54.330836 7721 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 02:06:54.330873 master-0 kubenswrapper[7721]: E0216 02:06:54.330862 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:56.330853053 +0000 UTC m=+19.825087325 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : secret "serving-cert" not found
Feb 16 02:06:54.330945 master-0 kubenswrapper[7721]: E0216 02:06:54.330898 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found
Feb 16 02:06:54.330945 master-0 kubenswrapper[7721]: E0216 02:06:54.330920 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:56.330913405 +0000 UTC m=+19.825147687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : configmap "etcd-serving-ca" not found
Feb 16 02:06:54.578518 master-0 kubenswrapper[7721]: I0216 02:06:54.578463 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"]
Feb 16 02:06:54.582223 master-0 kubenswrapper[7721]: I0216 02:06:54.582206 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"
Feb 16 02:06:54.593769 master-0 kubenswrapper[7721]: I0216 02:06:54.593710 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"]
Feb 16 02:06:54.644499 master-0 kubenswrapper[7721]: I0216 02:06:54.636993 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7276\" (UniqueName: \"kubernetes.io/projected/a3065737-c7c0-4fbb-b484-f2a9204d4908-kube-api-access-w7276\") pod \"csi-snapshot-controller-74b6595c6d-466x9\" (UID: \"a3065737-c7c0-4fbb-b484-f2a9204d4908\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"
Feb 16 02:06:54.740655 master-0 kubenswrapper[7721]: I0216 02:06:54.739330 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7276\" (UniqueName: \"kubernetes.io/projected/a3065737-c7c0-4fbb-b484-f2a9204d4908-kube-api-access-w7276\") pod \"csi-snapshot-controller-74b6595c6d-466x9\" (UID: \"a3065737-c7c0-4fbb-b484-f2a9204d4908\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"
Feb 16 02:06:54.762501 master-0 kubenswrapper[7721]: I0216 02:06:54.761135 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7276\" (UniqueName: \"kubernetes.io/projected/a3065737-c7c0-4fbb-b484-f2a9204d4908-kube-api-access-w7276\") pod \"csi-snapshot-controller-74b6595c6d-466x9\" (UID: \"a3065737-c7c0-4fbb-b484-f2a9204d4908\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"
Feb 16 02:06:54.914660 master-0 kubenswrapper[7721]: I0216 02:06:54.914545 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"
Feb 16 02:06:55.046480 master-0 kubenswrapper[7721]: I0216 02:06:55.046385 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerStarted","Data":"93b137c9da7cc55e696e731bc17c8d146d60020ad34798363a1b97a514dd88c5"}
Feb 16 02:06:55.049947 master-0 kubenswrapper[7721]: I0216 02:06:55.049882 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" event={"ID":"a0540a70-a256-422b-a827-e564d0e67866","Type":"ContainerStarted","Data":"ec4a831847dbd9a3830625bc2566b19d885784da6d7562dca0d18bf050003dad"}
Feb 16 02:06:55.053702 master-0 kubenswrapper[7721]: I0216 02:06:55.053621 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" event={"ID":"1f2d2601-481d-4e86-ac4c-3d34d5691261","Type":"ContainerStarted","Data":"748211ea15fe0b3ea676e1fc8e3f1abb179693db22eafb9d15047d193f62c672"}
Feb 16 02:06:55.127561 master-0 kubenswrapper[7721]: I0216 02:06:55.127518 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"]
Feb 16 02:06:55.138049 master-0 kubenswrapper[7721]: W0216 02:06:55.138009 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3065737_c7c0_4fbb_b484_f2a9204d4908.slice/crio-1a32ad8e692aa770e92a476bea483378880963d5f68eec068f4e2762350b7f00 WatchSource:0}: Error finding container 1a32ad8e692aa770e92a476bea483378880963d5f68eec068f4e2762350b7f00: Status 404 returned error can't find the container with id 1a32ad8e692aa770e92a476bea483378880963d5f68eec068f4e2762350b7f00
Feb 16 02:06:56.058610 master-0 kubenswrapper[7721]: I0216 02:06:56.058020 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerStarted","Data":"1a32ad8e692aa770e92a476bea483378880963d5f68eec068f4e2762350b7f00"}
Feb 16 02:06:56.360667 master-0 kubenswrapper[7721]: I0216 02:06:56.360542 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:56.360811 master-0 kubenswrapper[7721]: E0216 02:06:56.360747 7721 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Feb 16 02:06:56.360844 master-0 kubenswrapper[7721]: E0216 02:06:56.360828 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:00.36080336 +0000 UTC m=+23.855037662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : secret "etcd-client" not found
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: I0216 02:06:56.360909 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: I0216 02:06:56.360936 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: I0216 02:06:56.360987 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca\") pod \"apiserver-996fc886d-zb7x5\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") " pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: E0216 02:06:56.361055 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: E0216 02:06:56.361081 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:00.361072297 +0000 UTC m=+23.855306559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : configmap "etcd-serving-ca" not found
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: E0216 02:06:56.361105 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: E0216 02:06:56.361122 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:00.361116058 +0000 UTC m=+23.855350320 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : configmap "audit-0" not found
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: E0216 02:06:56.361158 7721 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 16 02:06:56.361190 master-0 kubenswrapper[7721]: E0216 02:06:56.361174 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert podName:8db9c82a-bb53-4e2b-9af3-d3eebc530c35 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:00.361168689 +0000 UTC m=+23.855402951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert") pod "apiserver-996fc886d-zb7x5" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35") : secret "serving-cert" not found
Feb 16 02:06:57.854741 master-0 kubenswrapper[7721]: I0216 02:06:57.854645 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-996fc886d-zb7x5"]
Feb 16 02:06:57.856038 master-0 kubenswrapper[7721]: E0216 02:06:57.855817 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit etcd-client etcd-serving-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-996fc886d-zb7x5" podUID="8db9c82a-bb53-4e2b-9af3-d3eebc530c35"
Feb 16 02:06:58.072202 master-0 kubenswrapper[7721]: I0216 02:06:58.072121 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:58.085476 master-0 kubenswrapper[7721]: I0216 02:06:58.083962 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:58.192741 master-0 kubenswrapper[7721]: I0216 02:06:58.192178 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-node-pullsecrets\") pod \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") "
Feb 16 02:06:58.193099 master-0 kubenswrapper[7721]: I0216 02:06:58.192777 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit-dir\") pod \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") "
Feb 16 02:06:58.193099 master-0 kubenswrapper[7721]: I0216 02:06:58.192315 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8db9c82a-bb53-4e2b-9af3-d3eebc530c35" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:58.193099 master-0 kubenswrapper[7721]: I0216 02:06:58.192833 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-image-import-ca\") pod \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") "
Feb 16 02:06:58.193099 master-0 kubenswrapper[7721]: I0216 02:06:58.192879 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-trusted-ca-bundle\") pod \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") "
Feb 16 02:06:58.193099 master-0 kubenswrapper[7721]: I0216 02:06:58.192936 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-config\") pod \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") "
Feb 16 02:06:58.193099 master-0 kubenswrapper[7721]: I0216 02:06:58.192995 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-encryption-config\") pod \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") "
Feb 16 02:06:58.193099 master-0 kubenswrapper[7721]: I0216 02:06:58.193037 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbpc4\" (UniqueName: \"kubernetes.io/projected/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-kube-api-access-rbpc4\") pod \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\" (UID: \"8db9c82a-bb53-4e2b-9af3-d3eebc530c35\") "
Feb 16 02:06:58.193951 master-0 kubenswrapper[7721]: I0216 02:06:58.193011 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8db9c82a-bb53-4e2b-9af3-d3eebc530c35" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:06:58.194035 master-0 kubenswrapper[7721]: I0216 02:06:58.193927 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "8db9c82a-bb53-4e2b-9af3-d3eebc530c35" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:06:58.194035 master-0 kubenswrapper[7721]: I0216 02:06:58.193940 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-config" (OuterVolumeSpecName: "config") pod "8db9c82a-bb53-4e2b-9af3-d3eebc530c35" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:06:58.194244 master-0 kubenswrapper[7721]: I0216 02:06:58.194177 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8db9c82a-bb53-4e2b-9af3-d3eebc530c35" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:06:58.200159 master-0 kubenswrapper[7721]: I0216 02:06:58.200072 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "8db9c82a-bb53-4e2b-9af3-d3eebc530c35" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:06:58.200475 master-0 kubenswrapper[7721]: I0216 02:06:58.200416 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-kube-api-access-rbpc4" (OuterVolumeSpecName: "kube-api-access-rbpc4") pod "8db9c82a-bb53-4e2b-9af3-d3eebc530c35" (UID: "8db9c82a-bb53-4e2b-9af3-d3eebc530c35"). InnerVolumeSpecName "kube-api-access-rbpc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:06:58.296582 master-0 kubenswrapper[7721]: I0216 02:06:58.295791 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbpc4\" (UniqueName: \"kubernetes.io/projected/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-kube-api-access-rbpc4\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:58.296582 master-0 kubenswrapper[7721]: I0216 02:06:58.295856 7721 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:58.296582 master-0 kubenswrapper[7721]: I0216 02:06:58.295877 7721 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:58.296582 master-0 kubenswrapper[7721]: I0216 02:06:58.295902 7721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:58.296582 master-0 kubenswrapper[7721]: I0216 02:06:58.295921 7721 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-image-import-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:58.296582 master-0 kubenswrapper[7721]: I0216 02:06:58.295939 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:58.296582 master-0 kubenswrapper[7721]: I0216 02:06:58.295957 7721 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-encryption-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:59.002994 master-0 kubenswrapper[7721]: I0216 02:06:59.002647 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 16 02:06:59.003600 master-0 kubenswrapper[7721]: I0216 02:06:59.003478 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.007159 master-0 kubenswrapper[7721]: I0216 02:06:59.006690 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 16 02:06:59.011563 master-0 kubenswrapper[7721]: I0216 02:06:59.011521 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 16 02:06:59.079380 master-0 kubenswrapper[7721]: I0216 02:06:59.079330 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-996fc886d-zb7x5"
Feb 16 02:06:59.112368 master-0 kubenswrapper[7721]: I0216 02:06:59.112326 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45476b57-538b-4031-80c9-8025a49e8e88-kube-api-access\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.112637 master-0 kubenswrapper[7721]: I0216 02:06:59.112618 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.112829 master-0 kubenswrapper[7721]: I0216 02:06:59.112815 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-var-lock\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.143908 master-0 kubenswrapper[7721]: I0216 02:06:59.143867 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-996fc886d-zb7x5"]
Feb 16 02:06:59.145760 master-0 kubenswrapper[7721]: I0216 02:06:59.145744 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7b55c578b7-cvwml"]
Feb 16 02:06:59.146648 master-0 kubenswrapper[7721]: I0216 02:06:59.146633 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.149851 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.150096 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.150234 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.150371 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.150490 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.150618 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.150771 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.150882 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 02:06:59.154457 master-0 kubenswrapper[7721]: I0216 02:06:59.151013 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 02:06:59.169275 master-0 kubenswrapper[7721]: I0216 02:06:59.166784 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-996fc886d-zb7x5"]
Feb 16 02:06:59.173648 master-0 kubenswrapper[7721]: I0216 02:06:59.173612 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 02:06:59.176906 master-0 kubenswrapper[7721]: I0216 02:06:59.176877 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7b55c578b7-cvwml"]
Feb 16 02:06:59.214071 master-0 kubenswrapper[7721]: I0216 02:06:59.213893 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45476b57-538b-4031-80c9-8025a49e8e88-kube-api-access\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.214071 master-0 kubenswrapper[7721]: I0216 02:06:59.213947 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.214474 master-0 kubenswrapper[7721]: I0216 02:06:59.214368 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.214624 master-0 kubenswrapper[7721]: I0216 02:06:59.214589 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-encryption-config\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.214823 master-0 kubenswrapper[7721]: I0216 02:06:59.214453 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.214990 master-0 kubenswrapper[7721]: I0216 02:06:59.214937 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-node-pullsecrets\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215133 master-0 kubenswrapper[7721]: I0216 02:06:59.215115 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-trusted-ca-bundle\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215270 master-0 kubenswrapper[7721]: I0216 02:06:59.215252 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-var-lock\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.215450 master-0 kubenswrapper[7721]: I0216 02:06:59.215379 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-var-lock\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.215450 master-0 kubenswrapper[7721]: I0216 02:06:59.215389 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215557 master-0 kubenswrapper[7721]: I0216 02:06:59.215528 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-audit-dir\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215605 master-0 kubenswrapper[7721]: I0216 02:06:59.215560 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-audit\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215605 master-0 kubenswrapper[7721]: I0216 02:06:59.215582 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215693 master-0 kubenswrapper[7721]: I0216 02:06:59.215609 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-image-import-ca\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215693 master-0 kubenswrapper[7721]: I0216 02:06:59.215672 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2fd\" (UniqueName: \"kubernetes.io/projected/89960c43-d761-48e4-a1e0-b25013788ac4-kube-api-access-pl2fd\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215797 master-0 kubenswrapper[7721]: I0216 02:06:59.215768 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-config\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.215856 master-0 kubenswrapper[7721]: I0216 02:06:59.215820 7721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-client\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:59.215856 master-0 kubenswrapper[7721]: I0216 02:06:59.215834 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:59.215856 master-0 kubenswrapper[7721]: I0216 02:06:59.215849 7721 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:59.215981 master-0 kubenswrapper[7721]: I0216 02:06:59.215863 7721 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8db9c82a-bb53-4e2b-9af3-d3eebc530c35-audit\") on node \"master-0\" DevicePath \"\""
Feb 16 02:06:59.234465 master-0 kubenswrapper[7721]: I0216 02:06:59.234417 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45476b57-538b-4031-80c9-8025a49e8e88-kube-api-access\") pod \"installer-1-master-0\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:06:59.317097 master-0 kubenswrapper[7721]: I0216 02:06:59.316954 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-trusted-ca-bundle\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317097 master-0 kubenswrapper[7721]: I0216 02:06:59.317004 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317097 master-0 kubenswrapper[7721]: I0216 02:06:59.317060 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-audit-dir\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317097 master-0 kubenswrapper[7721]: I0216 02:06:59.317087 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-audit\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317097 master-0 kubenswrapper[7721]: I0216 02:06:59.317107 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: I0216 02:06:59.317129 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-image-import-ca\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: I0216 02:06:59.317156 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2fd\" (UniqueName: \"kubernetes.io/projected/89960c43-d761-48e4-a1e0-b25013788ac4-kube-api-access-pl2fd\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: I0216 02:06:59.317194 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-config\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: I0216 02:06:59.317229 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-encryption-config\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: I0216 02:06:59.317248 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: I0216 02:06:59.317269 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: I0216 02:06:59.317302 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-node-pullsecrets\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: I0216 02:06:59.317324 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: E0216 02:06:59.317420 7721 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 02:06:59.317517 master-0 kubenswrapper[7721]: E0216 02:06:59.317491 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca podName:07a1ea85-2ac8-4c6a-a585-c35ebf55b33d nodeName:}" failed. No retries permitted until 2026-02-16 02:07:15.317475552 +0000 UTC m=+38.811709814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca") pod "route-controller-manager-dc665b75-6qzmt" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d") : configmap "client-ca" not found
Feb 16 02:06:59.318543 master-0 kubenswrapper[7721]: E0216 02:06:59.318103 7721 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Feb 16 02:06:59.318543 master-0 kubenswrapper[7721]: E0216 02:06:59.318274 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client podName:89960c43-d761-48e4-a1e0-b25013788ac4 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:59.81820368 +0000 UTC m=+23.312437952 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client") pod "apiserver-7b55c578b7-cvwml" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4") : secret "etcd-client" not found Feb 16 02:06:59.318543 master-0 kubenswrapper[7721]: I0216 02:06:59.318323 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-node-pullsecrets\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.318543 master-0 kubenswrapper[7721]: E0216 02:06:59.318475 7721 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 02:06:59.318543 master-0 kubenswrapper[7721]: E0216 02:06:59.318520 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert podName:89960c43-d761-48e4-a1e0-b25013788ac4 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:59.818509368 +0000 UTC m=+23.312743650 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert") pod "apiserver-7b55c578b7-cvwml" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4") : secret "serving-cert" not found Feb 16 02:06:59.319617 master-0 kubenswrapper[7721]: I0216 02:06:59.318657 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-trusted-ca-bundle\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.319617 master-0 kubenswrapper[7721]: I0216 02:06:59.318719 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-audit-dir\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.319617 master-0 kubenswrapper[7721]: E0216 02:06:59.318840 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found Feb 16 02:06:59.319617 master-0 kubenswrapper[7721]: E0216 02:06:59.318886 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca podName:89960c43-d761-48e4-a1e0-b25013788ac4 nodeName:}" failed. No retries permitted until 2026-02-16 02:06:59.818872677 +0000 UTC m=+23.313106949 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca") pod "apiserver-7b55c578b7-cvwml" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4") : configmap "etcd-serving-ca" not found Feb 16 02:06:59.324007 master-0 kubenswrapper[7721]: I0216 02:06:59.320622 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-image-import-ca\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.324007 master-0 kubenswrapper[7721]: I0216 02:06:59.322368 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-config\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.324007 master-0 kubenswrapper[7721]: I0216 02:06:59.322746 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-encryption-config\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.324007 master-0 kubenswrapper[7721]: I0216 02:06:59.323509 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-audit\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.326731 master-0 kubenswrapper[7721]: I0216 02:06:59.326682 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 02:06:59.327127 master-0 kubenswrapper[7721]: I0216 02:06:59.326749 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"route-controller-manager-dc665b75-6qzmt\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:06:59.344085 master-0 kubenswrapper[7721]: I0216 02:06:59.344028 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2fd\" (UniqueName: \"kubernetes.io/projected/89960c43-d761-48e4-a1e0-b25013788ac4-kube-api-access-pl2fd\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.823711 master-0 kubenswrapper[7721]: I0216 02:06:59.823603 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.824214 master-0 kubenswrapper[7721]: I0216 02:06:59.824094 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.824392 master-0 kubenswrapper[7721]: E0216 02:06:59.824294 7721 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found Feb 16 02:06:59.824569 master-0 kubenswrapper[7721]: E0216 
02:06:59.824418 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca podName:89960c43-d761-48e4-a1e0-b25013788ac4 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:00.824384174 +0000 UTC m=+24.318618556 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca") pod "apiserver-7b55c578b7-cvwml" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4") : configmap "etcd-serving-ca" not found Feb 16 02:06:59.824569 master-0 kubenswrapper[7721]: I0216 02:06:59.824516 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:06:59.824783 master-0 kubenswrapper[7721]: E0216 02:06:59.824708 7721 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Feb 16 02:06:59.824869 master-0 kubenswrapper[7721]: E0216 02:06:59.824817 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client podName:89960c43-d761-48e4-a1e0-b25013788ac4 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:00.824780554 +0000 UTC m=+24.319015016 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client") pod "apiserver-7b55c578b7-cvwml" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4") : secret "etcd-client" not found Feb 16 02:06:59.830204 master-0 kubenswrapper[7721]: I0216 02:06:59.830150 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:07:00.731953 master-0 kubenswrapper[7721]: I0216 02:07:00.731859 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db9c82a-bb53-4e2b-9af3-d3eebc530c35" path="/var/lib/kubelet/pods/8db9c82a-bb53-4e2b-9af3-d3eebc530c35/volumes" Feb 16 02:07:00.844102 master-0 kubenswrapper[7721]: I0216 02:07:00.843996 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:07:00.844406 master-0 kubenswrapper[7721]: I0216 02:07:00.844311 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:07:00.844406 master-0 kubenswrapper[7721]: E0216 02:07:00.844314 7721 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Feb 16 02:07:00.844597 master-0 kubenswrapper[7721]: E0216 02:07:00.844464 7721 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client podName:89960c43-d761-48e4-a1e0-b25013788ac4 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:02.84440656 +0000 UTC m=+26.338640862 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client") pod "apiserver-7b55c578b7-cvwml" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4") : secret "etcd-client" not found Feb 16 02:07:00.845708 master-0 kubenswrapper[7721]: I0216 02:07:00.845643 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:07:01.047582 master-0 kubenswrapper[7721]: I0216 02:07:01.047459 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca\") pod \"controller-manager-6dc6c44bd6-rj7sr\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:07:01.047771 master-0 kubenswrapper[7721]: E0216 02:07:01.047725 7721 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 02:07:01.047823 master-0 kubenswrapper[7721]: E0216 02:07:01.047805 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca podName:6c0424f7-d2b6-4396-a051-7fd1dc97c67f nodeName:}" failed. No retries permitted until 2026-02-16 02:07:17.047786598 +0000 UTC m=+40.542020870 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca") pod "controller-manager-6dc6c44bd6-rj7sr" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f") : configmap "client-ca" not found Feb 16 02:07:01.208425 master-0 kubenswrapper[7721]: I0216 02:07:01.208358 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"] Feb 16 02:07:01.208697 master-0 kubenswrapper[7721]: E0216 02:07:01.208661 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" podUID="6c0424f7-d2b6-4396-a051-7fd1dc97c67f" Feb 16 02:07:01.241888 master-0 kubenswrapper[7721]: I0216 02:07:01.241821 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"] Feb 16 02:07:01.242551 master-0 kubenswrapper[7721]: E0216 02:07:01.242519 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" podUID="07a1ea85-2ac8-4c6a-a585-c35ebf55b33d" Feb 16 02:07:02.096397 master-0 kubenswrapper[7721]: I0216 02:07:02.096107 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:07:02.096397 master-0 kubenswrapper[7721]: I0216 02:07:02.096213 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:07:02.111245 master-0 kubenswrapper[7721]: I0216 02:07:02.111213 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:07:02.114584 master-0 kubenswrapper[7721]: I0216 02:07:02.114546 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:07:02.161289 master-0 kubenswrapper[7721]: I0216 02:07:02.160878 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk8b7\" (UniqueName: \"kubernetes.io/projected/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-kube-api-access-lk8b7\") pod \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " Feb 16 02:07:02.161289 master-0 kubenswrapper[7721]: I0216 02:07:02.161274 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-config\") pod \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " Feb 16 02:07:02.161640 master-0 kubenswrapper[7721]: I0216 02:07:02.161325 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") pod \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " Feb 16 02:07:02.161640 master-0 kubenswrapper[7721]: I0216 02:07:02.161352 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert\") pod \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " Feb 16 02:07:02.161640 master-0 kubenswrapper[7721]: I0216 02:07:02.161390 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-proxy-ca-bundles\") pod \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\" (UID: \"6c0424f7-d2b6-4396-a051-7fd1dc97c67f\") " Feb 16 02:07:02.161640 master-0 kubenswrapper[7721]: I0216 02:07:02.161422 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9rl\" (UniqueName: \"kubernetes.io/projected/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-kube-api-access-5d9rl\") pod \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " Feb 16 02:07:02.161640 master-0 kubenswrapper[7721]: I0216 02:07:02.161485 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-config\") pod \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\" (UID: \"07a1ea85-2ac8-4c6a-a585-c35ebf55b33d\") " Feb 16 02:07:02.163259 master-0 kubenswrapper[7721]: I0216 02:07:02.162821 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-config" (OuterVolumeSpecName: "config") pod "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:02.163365 master-0 kubenswrapper[7721]: I0216 02:07:02.163295 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-config" (OuterVolumeSpecName: "config") pod "6c0424f7-d2b6-4396-a051-7fd1dc97c67f" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:02.164015 master-0 kubenswrapper[7721]: I0216 02:07:02.163921 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6c0424f7-d2b6-4396-a051-7fd1dc97c67f" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:02.167265 master-0 kubenswrapper[7721]: I0216 02:07:02.167093 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6c0424f7-d2b6-4396-a051-7fd1dc97c67f" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:07:02.167534 master-0 kubenswrapper[7721]: I0216 02:07:02.167371 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-kube-api-access-5d9rl" (OuterVolumeSpecName: "kube-api-access-5d9rl") pod "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d"). InnerVolumeSpecName "kube-api-access-5d9rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:07:02.168283 master-0 kubenswrapper[7721]: I0216 02:07:02.168207 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-kube-api-access-lk8b7" (OuterVolumeSpecName: "kube-api-access-lk8b7") pod "6c0424f7-d2b6-4396-a051-7fd1dc97c67f" (UID: "6c0424f7-d2b6-4396-a051-7fd1dc97c67f"). InnerVolumeSpecName "kube-api-access-lk8b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:07:02.168686 master-0 kubenswrapper[7721]: I0216 02:07:02.168620 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d" (UID: "07a1ea85-2ac8-4c6a-a585-c35ebf55b33d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:07:02.264025 master-0 kubenswrapper[7721]: I0216 02:07:02.263609 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:02.264233 master-0 kubenswrapper[7721]: I0216 02:07:02.264026 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk8b7\" (UniqueName: \"kubernetes.io/projected/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-kube-api-access-lk8b7\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:02.264233 master-0 kubenswrapper[7721]: I0216 02:07:02.264071 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:02.264233 master-0 kubenswrapper[7721]: I0216 02:07:02.264083 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:02.264233 master-0 kubenswrapper[7721]: I0216 02:07:02.264099 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:02.264233 master-0 kubenswrapper[7721]: I0216 02:07:02.264111 7721 reconciler_common.go:293] 
"Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:02.264464 master-0 kubenswrapper[7721]: I0216 02:07:02.264256 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d9rl\" (UniqueName: \"kubernetes.io/projected/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-kube-api-access-5d9rl\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:02.322812 master-0 kubenswrapper[7721]: I0216 02:07:02.322769 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 02:07:02.768048 master-0 kubenswrapper[7721]: I0216 02:07:02.767484 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-vvw25"] Feb 16 02:07:02.769016 master-0 kubenswrapper[7721]: I0216 02:07:02.768968 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.873730 master-0 kubenswrapper[7721]: I0216 02:07:02.873494 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-modprobe-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.873730 master-0 kubenswrapper[7721]: I0216 02:07:02.873580 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-tmp\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.873730 master-0 kubenswrapper[7721]: I0216 02:07:02.873632 7721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-host\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.873730 master-0 kubenswrapper[7721]: I0216 02:07:02.873655 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysconfig\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.873730 master-0 kubenswrapper[7721]: I0216 02:07:02.873724 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-systemd\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.873730 master-0 kubenswrapper[7721]: I0216 02:07:02.873750 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-tuned\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: I0216 02:07:02.873853 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: I0216 02:07:02.873916 7721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-lib-modules\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: I0216 02:07:02.873978 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: I0216 02:07:02.874009 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-conf\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: E0216 02:07:02.874034 7721 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: I0216 02:07:02.874093 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-var-lib-kubelet\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: E0216 02:07:02.874112 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client 
podName:89960c43-d761-48e4-a1e0-b25013788ac4 nodeName:}" failed. No retries permitted until 2026-02-16 02:07:06.874089338 +0000 UTC m=+30.368323610 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client") pod "apiserver-7b55c578b7-cvwml" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4") : secret "etcd-client" not found Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: I0216 02:07:02.874132 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-kubernetes\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: I0216 02:07:02.874155 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kj7r\" (UniqueName: \"kubernetes.io/projected/00ef3b03-55dc-4661-b7fd-1e586c45b5de-kube-api-access-7kj7r\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.874272 master-0 kubenswrapper[7721]: I0216 02:07:02.874204 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-run\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.875782 master-0 kubenswrapper[7721]: I0216 02:07:02.874336 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-sys\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") 
" pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.976131 master-0 kubenswrapper[7721]: I0216 02:07:02.976073 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-tuned\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.976281 master-0 kubenswrapper[7721]: I0216 02:07:02.976171 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-lib-modules\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.976281 master-0 kubenswrapper[7721]: I0216 02:07:02.976244 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.976587 master-0 kubenswrapper[7721]: I0216 02:07:02.976510 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-conf\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.976587 master-0 kubenswrapper[7721]: I0216 02:07:02.976577 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-kubernetes\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " 
pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.976612 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-var-lib-kubelet\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.976651 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kj7r\" (UniqueName: \"kubernetes.io/projected/00ef3b03-55dc-4661-b7fd-1e586c45b5de-kube-api-access-7kj7r\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.976696 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.976766 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-var-lib-kubelet\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.976810 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-conf\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " 
pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.976867 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-kubernetes\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.976923 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-run\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.976965 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-run\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977057 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-sys\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977122 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-lib-modules\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 
02:07:02.977126 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-modprobe-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977171 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-tmp\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977183 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-sys\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977222 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-host\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977303 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysconfig\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977351 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-modprobe-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977397 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-host\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977474 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-systemd\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977591 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysconfig\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.977740 master-0 kubenswrapper[7721]: I0216 02:07:02.977629 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-systemd\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.984376 master-0 kubenswrapper[7721]: I0216 02:07:02.984317 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: 
\"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-tuned\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:02.984630 master-0 kubenswrapper[7721]: I0216 02:07:02.984409 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-tmp\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:03.103272 master-0 kubenswrapper[7721]: I0216 02:07:03.103203 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" event={"ID":"b2a83ddd-ffa5-4127-9099-91187ad9dbba","Type":"ContainerStarted","Data":"50a2835b716fb250bc2c699d4e73c9bb348121fd917ece4f764fb89f9869e12c"} Feb 16 02:07:03.106211 master-0 kubenswrapper[7721]: I0216 02:07:03.106130 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" event={"ID":"a0540a70-a256-422b-a827-e564d0e67866","Type":"ContainerStarted","Data":"30efe53db3804e482e25f9cbbd12869b15c2e11c0cc6c671d7115288061dd618"} Feb 16 02:07:03.108832 master-0 kubenswrapper[7721]: I0216 02:07:03.108776 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"45476b57-538b-4031-80c9-8025a49e8e88","Type":"ContainerStarted","Data":"844468f5a50bbaafc70df5bd3186ac9f161b117793658af967159de5ce3fa619"} Feb 16 02:07:03.108832 master-0 kubenswrapper[7721]: I0216 02:07:03.108818 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"45476b57-538b-4031-80c9-8025a49e8e88","Type":"ContainerStarted","Data":"fd6a595d794b352d399441301e60f9c24a74357bb8a6e67bdca2c5e538615037"} Feb 16 02:07:03.111250 
master-0 kubenswrapper[7721]: I0216 02:07:03.111178 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerStarted","Data":"37393f3209e22fdba80463ac1612aee9793e0477a277020982d8df5dfbf209db"} Feb 16 02:07:03.120102 master-0 kubenswrapper[7721]: I0216 02:07:03.120029 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" event={"ID":"2ffa4db8-97da-42de-8e51-35680f518ca7","Type":"ContainerStarted","Data":"94907b842f240d2c95aa18b7208afcdd2c48d89e45fad946689635854319f574"} Feb 16 02:07:03.120238 master-0 kubenswrapper[7721]: I0216 02:07:03.120113 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" event={"ID":"2ffa4db8-97da-42de-8e51-35680f518ca7","Type":"ContainerStarted","Data":"0164607851a01dd61823c3d5055f9bfba2e409fdb0687dc576a048692dd6f1ae"} Feb 16 02:07:03.123099 master-0 kubenswrapper[7721]: I0216 02:07:03.123022 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" event={"ID":"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff","Type":"ContainerStarted","Data":"84ef78af69cbe8091fbf2ce8a0527fe0720ef4fc8feb2eb70bee52acdec1eabf"} Feb 16 02:07:03.129625 master-0 kubenswrapper[7721]: I0216 02:07:03.129511 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt" Feb 16 02:07:03.131834 master-0 kubenswrapper[7721]: I0216 02:07:03.131155 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerStarted","Data":"e7f0a0eea4d904b15f8bede1e9264a49ffd2a0b983dd56cce81b31ebad1fe28d"} Feb 16 02:07:03.131834 master-0 kubenswrapper[7721]: I0216 02:07:03.131222 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerStarted","Data":"87993edba6f07930300de55e54a0440afea4e88c5ea50fe933142a412c18bfd2"} Feb 16 02:07:03.131834 master-0 kubenswrapper[7721]: I0216 02:07:03.131296 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr" Feb 16 02:07:03.547330 master-0 kubenswrapper[7721]: I0216 02:07:03.547279 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kj7r\" (UniqueName: \"kubernetes.io/projected/00ef3b03-55dc-4661-b7fd-1e586c45b5de-kube-api-access-7kj7r\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:03.696507 master-0 kubenswrapper[7721]: I0216 02:07:03.696396 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:07:03.720209 master-0 kubenswrapper[7721]: W0216 02:07:03.719652 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00ef3b03_55dc_4661_b7fd_1e586c45b5de.slice/crio-ee473217ed918c364de6da6546ce4f74e28f6fa5ca8444708e6841db76200402 WatchSource:0}: Error finding container ee473217ed918c364de6da6546ce4f74e28f6fa5ca8444708e6841db76200402: Status 404 returned error can't find the container with id ee473217ed918c364de6da6546ce4f74e28f6fa5ca8444708e6841db76200402 Feb 16 02:07:04.134707 master-0 kubenswrapper[7721]: I0216 02:07:04.134555 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-vvw25" event={"ID":"00ef3b03-55dc-4661-b7fd-1e586c45b5de","Type":"ContainerStarted","Data":"ee473217ed918c364de6da6546ce4f74e28f6fa5ca8444708e6841db76200402"} Feb 16 02:07:04.195770 master-0 kubenswrapper[7721]: I0216 02:07:04.195708 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l"] Feb 16 02:07:04.196375 master-0 kubenswrapper[7721]: I0216 02:07:04.196341 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.204542 master-0 kubenswrapper[7721]: I0216 02:07:04.204491 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 02:07:04.206714 master-0 kubenswrapper[7721]: I0216 02:07:04.206674 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 02:07:04.206921 master-0 kubenswrapper[7721]: I0216 02:07:04.206889 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 02:07:04.207095 master-0 kubenswrapper[7721]: I0216 02:07:04.207059 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 02:07:04.207266 master-0 kubenswrapper[7721]: I0216 02:07:04.207233 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 02:07:04.298965 master-0 kubenswrapper[7721]: I0216 02:07:04.298908 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-client-ca\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.299181 master-0 kubenswrapper[7721]: I0216 02:07:04.298993 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvm7n\" (UniqueName: \"kubernetes.io/projected/e80c9628-20e2-4326-b1f5-810fd755d7ca-kube-api-access-bvm7n\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " 
pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.299181 master-0 kubenswrapper[7721]: I0216 02:07:04.299026 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-config\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.299181 master-0 kubenswrapper[7721]: I0216 02:07:04.299131 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c9628-20e2-4326-b1f5-810fd755d7ca-serving-cert\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.400839 master-0 kubenswrapper[7721]: I0216 02:07:04.400695 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-client-ca\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.400839 master-0 kubenswrapper[7721]: I0216 02:07:04.400766 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvm7n\" (UniqueName: \"kubernetes.io/projected/e80c9628-20e2-4326-b1f5-810fd755d7ca-kube-api-access-bvm7n\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.401174 master-0 kubenswrapper[7721]: I0216 02:07:04.400984 7721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-config\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.401174 master-0 kubenswrapper[7721]: I0216 02:07:04.401067 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c9628-20e2-4326-b1f5-810fd755d7ca-serving-cert\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.402518 master-0 kubenswrapper[7721]: I0216 02:07:04.402239 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-client-ca\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.403233 master-0 kubenswrapper[7721]: I0216 02:07:04.403155 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-config\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.407737 master-0 kubenswrapper[7721]: I0216 02:07:04.407683 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c9628-20e2-4326-b1f5-810fd755d7ca-serving-cert\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: 
\"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:04.637077 master-0 kubenswrapper[7721]: I0216 02:07:04.636970 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l"] Feb 16 02:07:04.882879 master-0 kubenswrapper[7721]: I0216 02:07:04.882771 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"] Feb 16 02:07:05.141809 master-0 kubenswrapper[7721]: I0216 02:07:05.141161 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-vvw25" event={"ID":"00ef3b03-55dc-4661-b7fd-1e586c45b5de","Type":"ContainerStarted","Data":"d23d03f9e1c86a1c579d3f7865dfa6ddaf4a60e4b3bf058bbda45a237a1a5681"} Feb 16 02:07:05.267020 master-0 kubenswrapper[7721]: I0216 02:07:05.266906 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc665b75-6qzmt"] Feb 16 02:07:05.311767 master-0 kubenswrapper[7721]: I0216 02:07:05.311693 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvm7n\" (UniqueName: \"kubernetes.io/projected/e80c9628-20e2-4326-b1f5-810fd755d7ca-kube-api-access-bvm7n\") pod \"route-controller-manager-8b946f9d6-8xg2l\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") " pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:05.314146 master-0 kubenswrapper[7721]: I0216 02:07:05.314070 7721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:05.427698 master-0 kubenswrapper[7721]: I0216 02:07:05.427056 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:06.098813 master-0 kubenswrapper[7721]: I0216 02:07:06.098624 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7b55c578b7-cvwml"] Feb 16 02:07:06.105142 master-0 kubenswrapper[7721]: E0216 02:07:06.099109 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etcd-client], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" podUID="89960c43-d761-48e4-a1e0-b25013788ac4" Feb 16 02:07:06.109975 master-0 kubenswrapper[7721]: I0216 02:07:06.109885 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l"] Feb 16 02:07:06.112243 master-0 kubenswrapper[7721]: I0216 02:07:06.112054 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podStartSLOduration=5.15512907 podStartE2EDuration="12.112015684s" podCreationTimestamp="2026-02-16 02:06:54 +0000 UTC" firstStartedPulling="2026-02-16 02:06:55.143629844 +0000 UTC m=+18.637864116" lastFinishedPulling="2026-02-16 02:07:02.100516428 +0000 UTC m=+25.594750730" observedRunningTime="2026-02-16 02:07:06.104205881 +0000 UTC m=+29.598440183" watchObservedRunningTime="2026-02-16 02:07:06.112015684 +0000 UTC m=+29.606250026" Feb 16 02:07:06.130476 master-0 kubenswrapper[7721]: W0216 02:07:06.130370 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode80c9628_20e2_4326_b1f5_810fd755d7ca.slice/crio-c8075716643da52e20d813a5b534de5957e55b1d15acbbf38c5599862fa2e772 WatchSource:0}: Error finding container c8075716643da52e20d813a5b534de5957e55b1d15acbbf38c5599862fa2e772: Status 404 returned error can't find the container with id 
c8075716643da52e20d813a5b534de5957e55b1d15acbbf38c5599862fa2e772 Feb 16 02:07:06.149465 master-0 kubenswrapper[7721]: I0216 02:07:06.148841 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" event={"ID":"e80c9628-20e2-4326-b1f5-810fd755d7ca","Type":"ContainerStarted","Data":"c8075716643da52e20d813a5b534de5957e55b1d15acbbf38c5599862fa2e772"} Feb 16 02:07:06.149465 master-0 kubenswrapper[7721]: I0216 02:07:06.149151 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:07:06.160705 master-0 kubenswrapper[7721]: I0216 02:07:06.160649 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7b55c578b7-cvwml" Feb 16 02:07:06.230513 master-0 kubenswrapper[7721]: I0216 02:07:06.230417 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-audit\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.230681 master-0 kubenswrapper[7721]: I0216 02:07:06.230539 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-config\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.230681 master-0 kubenswrapper[7721]: I0216 02:07:06.230603 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-image-import-ca\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.230681 master-0 kubenswrapper[7721]: I0216 02:07:06.230647 7721 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-audit-dir\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.230873 master-0 kubenswrapper[7721]: I0216 02:07:06.230705 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-encryption-config\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.230873 master-0 kubenswrapper[7721]: I0216 02:07:06.230748 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl2fd\" (UniqueName: \"kubernetes.io/projected/89960c43-d761-48e4-a1e0-b25013788ac4-kube-api-access-pl2fd\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.230873 master-0 kubenswrapper[7721]: I0216 02:07:06.230807 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-node-pullsecrets\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.230873 master-0 kubenswrapper[7721]: I0216 02:07:06.230843 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.231092 master-0 kubenswrapper[7721]: I0216 02:07:06.230894 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.231092 master-0 kubenswrapper[7721]: I0216 02:07:06.230952 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-trusted-ca-bundle\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " Feb 16 02:07:06.231092 master-0 kubenswrapper[7721]: I0216 02:07:06.231029 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.231154 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.231256 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-audit" (OuterVolumeSpecName: "audit") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.231783 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.232092 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.232132 7721 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-image-import-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.232200 7721 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.232225 7721 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89960c43-d761-48e4-a1e0-b25013788ac4-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.232252 7721 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-audit\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.232721 master-0 kubenswrapper[7721]: I0216 02:07:06.232139 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:06.233282 master-0 kubenswrapper[7721]: I0216 02:07:06.232869 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-config" (OuterVolumeSpecName: "config") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:06.236919 master-0 kubenswrapper[7721]: I0216 02:07:06.236839 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89960c43-d761-48e4-a1e0-b25013788ac4-kube-api-access-pl2fd" (OuterVolumeSpecName: "kube-api-access-pl2fd") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "kube-api-access-pl2fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:07:06.237414 master-0 kubenswrapper[7721]: I0216 02:07:06.237310 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:07:06.237804 master-0 kubenswrapper[7721]: I0216 02:07:06.237654 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:07:06.334466 master-0 kubenswrapper[7721]: I0216 02:07:06.334360 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl2fd\" (UniqueName: \"kubernetes.io/projected/89960c43-d761-48e4-a1e0-b25013788ac4-kube-api-access-pl2fd\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.334466 master-0 kubenswrapper[7721]: I0216 02:07:06.334461 7721 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.334652 master-0 kubenswrapper[7721]: I0216 02:07:06.334485 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.334652 master-0 kubenswrapper[7721]: I0216 02:07:06.334504 7721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.334652 master-0 kubenswrapper[7721]: I0216 02:07:06.334524 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89960c43-d761-48e4-a1e0-b25013788ac4-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.334652 master-0 kubenswrapper[7721]: I0216 
02:07:06.334544 7721 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:06.527074 master-0 kubenswrapper[7721]: I0216 02:07:06.527011 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"] Feb 16 02:07:06.527771 master-0 kubenswrapper[7721]: I0216 02:07:06.527749 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.532966 master-0 kubenswrapper[7721]: I0216 02:07:06.532919 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 16 02:07:06.533468 master-0 kubenswrapper[7721]: I0216 02:07:06.533412 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 16 02:07:06.533635 master-0 kubenswrapper[7721]: I0216 02:07:06.533592 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 16 02:07:06.557529 master-0 kubenswrapper[7721]: I0216 02:07:06.557478 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"] Feb 16 02:07:06.577850 master-0 kubenswrapper[7721]: I0216 02:07:06.577657 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 16 02:07:06.594837 master-0 kubenswrapper[7721]: I0216 02:07:06.593462 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"] Feb 16 02:07:06.594837 master-0 kubenswrapper[7721]: I0216 02:07:06.594094 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.601062 master-0 kubenswrapper[7721]: I0216 02:07:06.601023 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 16 02:07:06.618118 master-0 kubenswrapper[7721]: I0216 02:07:06.614785 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 16 02:07:06.620515 master-0 kubenswrapper[7721]: I0216 02:07:06.620455 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 02:07:06.634493 master-0 kubenswrapper[7721]: I0216 02:07:06.633451 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"] Feb 16 02:07:06.639938 master-0 kubenswrapper[7721]: I0216 02:07:06.639889 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.640113 master-0 kubenswrapper[7721]: I0216 02:07:06.639947 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.640113 master-0 kubenswrapper[7721]: I0216 02:07:06.639978 7721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.640113 master-0 kubenswrapper[7721]: I0216 02:07:06.639996 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.640113 master-0 kubenswrapper[7721]: I0216 02:07:06.640016 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.640113 master-0 kubenswrapper[7721]: I0216 02:07:06.640044 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-cache\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.640113 master-0 kubenswrapper[7721]: I0216 02:07:06.640069 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.640113 master-0 kubenswrapper[7721]: I0216 02:07:06.640084 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf7tw\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-kube-api-access-kf7tw\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.640113 master-0 kubenswrapper[7721]: I0216 02:07:06.640106 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6v2\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-kube-api-access-lj6v2\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.640343 master-0 kubenswrapper[7721]: I0216 02:07:06.640131 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/27d876a7-6a48-4942-ad96-ed8ed3aa104b-cache\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.640343 master-0 kubenswrapper[7721]: I0216 02:07:06.640149 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.678975 master-0 kubenswrapper[7721]: I0216 02:07:06.678915 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"] Feb 16 02:07:06.680022 master-0 kubenswrapper[7721]: I0216 02:07:06.679998 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" Feb 16 02:07:06.694013 master-0 kubenswrapper[7721]: I0216 02:07:06.691734 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 02:07:06.694013 master-0 kubenswrapper[7721]: I0216 02:07:06.691790 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 02:07:06.694013 master-0 kubenswrapper[7721]: I0216 02:07:06.692096 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 02:07:06.694013 master-0 kubenswrapper[7721]: I0216 02:07:06.692330 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 02:07:06.694013 master-0 kubenswrapper[7721]: I0216 02:07:06.693024 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 02:07:06.698071 master-0 kubenswrapper[7721]: I0216 02:07:06.698039 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 02:07:06.706905 master-0 kubenswrapper[7721]: I0216 02:07:06.706872 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"] Feb 16 02:07:06.707090 master-0 kubenswrapper[7721]: I0216 02:07:06.707080 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"] Feb 16 02:07:06.721919 master-0 kubenswrapper[7721]: I0216 02:07:06.721829 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dc6c44bd6-rj7sr"] Feb 16 02:07:06.738281 master-0 kubenswrapper[7721]: I0216 02:07:06.738219 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a1ea85-2ac8-4c6a-a585-c35ebf55b33d" path="/var/lib/kubelet/pods/07a1ea85-2ac8-4c6a-a585-c35ebf55b33d/volumes" Feb 16 02:07:06.740961 master-0 kubenswrapper[7721]: I0216 02:07:06.738891 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0424f7-d2b6-4396-a051-7fd1dc97c67f" path="/var/lib/kubelet/pods/6c0424f7-d2b6-4396-a051-7fd1dc97c67f/volumes" Feb 16 02:07:06.740961 master-0 kubenswrapper[7721]: I0216 02:07:06.740225 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:07:06.740961 master-0 kubenswrapper[7721]: I0216 02:07:06.740403 7721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.741185 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=8.74115732 podStartE2EDuration="8.74115732s" podCreationTimestamp="2026-02-16 02:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:06.739861088 +0000 UTC m=+30.234095370" watchObservedRunningTime="2026-02-16 02:07:06.74115732 +0000 UTC m=+30.235391582" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 
02:07:06.742863 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.742897 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-client-ca\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.742948 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.742970 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.742990 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-catalogserver-certs\") 
pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743019 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743044 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-cache\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743067 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf7tw\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-kube-api-access-kf7tw\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743084 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 
16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743102 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6v2\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-kube-api-access-lj6v2\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743125 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/27d876a7-6a48-4942-ad96-ed8ed3aa104b-cache\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743147 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743164 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-config\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743202 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-proxy-ca-bundles\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743283 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v2sc\" (UniqueName: \"kubernetes.io/projected/70143f1c-5cf7-45b6-9490-8aa5535443c0-kube-api-access-6v2sc\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743303 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70143f1c-5cf7-45b6-9490-8aa5535443c0-serving-cert\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743509 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743850 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/27d876a7-6a48-4942-ad96-ed8ed3aa104b-cache\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.743890 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.748490 master-0 kubenswrapper[7721]: I0216 02:07:06.744183 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.756824 master-0 kubenswrapper[7721]: I0216 02:07:06.752389 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.756824 master-0 kubenswrapper[7721]: I0216 02:07:06.752582 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.756824 master-0 kubenswrapper[7721]: I0216 02:07:06.752915 7721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.756824 master-0 kubenswrapper[7721]: I0216 02:07:06.753191 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-cache\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.756824 master-0 kubenswrapper[7721]: I0216 02:07:06.755719 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.779184 master-0 kubenswrapper[7721]: I0216 02:07:06.777882 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:07:06.782456 master-0 kubenswrapper[7721]: I0216 02:07:06.781710 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf7tw\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-kube-api-access-kf7tw\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:06.787995 master-0 kubenswrapper[7721]: I0216 02:07:06.787951 7721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lj6v2\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-kube-api-access-lj6v2\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:06.816953 master-0 kubenswrapper[7721]: I0216 02:07:06.816912 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-njlg6"] Feb 16 02:07:06.817640 master-0 kubenswrapper[7721]: I0216 02:07:06.817620 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-njlg6" Feb 16 02:07:06.820291 master-0 kubenswrapper[7721]: I0216 02:07:06.820257 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 02:07:06.821611 master-0 kubenswrapper[7721]: I0216 02:07:06.821557 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 02:07:06.821666 master-0 kubenswrapper[7721]: I0216 02:07:06.821614 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 02:07:06.822352 master-0 kubenswrapper[7721]: I0216 02:07:06.821614 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 02:07:06.827412 master-0 kubenswrapper[7721]: I0216 02:07:06.827382 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-njlg6"] Feb 16 02:07:06.843942 master-0 kubenswrapper[7721]: I0216 02:07:06.842663 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:07:06.846321 master-0 kubenswrapper[7721]: I0216 02:07:06.845868 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-config\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.846321 master-0 kubenswrapper[7721]: I0216 02:07:06.845917 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-proxy-ca-bundles\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.846321 master-0 kubenswrapper[7721]: I0216 02:07:06.845940 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v2sc\" (UniqueName: \"kubernetes.io/projected/70143f1c-5cf7-45b6-9490-8aa5535443c0-kube-api-access-6v2sc\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.846321 master-0 kubenswrapper[7721]: I0216 02:07:06.845990 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70143f1c-5cf7-45b6-9490-8aa5535443c0-serving-cert\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.846321 master-0 kubenswrapper[7721]: I0216 02:07:06.846305 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-client-ca\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.846496 master-0 kubenswrapper[7721]: I0216 02:07:06.846424 7721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c0424f7-d2b6-4396-a051-7fd1dc97c67f-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:06.847881 master-0 kubenswrapper[7721]: I0216 02:07:06.847828 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-config\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.851622 master-0 kubenswrapper[7721]: I0216 02:07:06.849352 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70143f1c-5cf7-45b6-9490-8aa5535443c0-serving-cert\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.851622 master-0 kubenswrapper[7721]: I0216 02:07:06.850049 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-client-ca\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.851622 master-0 kubenswrapper[7721]: I0216 02:07:06.851002 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-proxy-ca-bundles\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.880697 master-0 kubenswrapper[7721]: I0216 02:07:06.878629 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v2sc\" (UniqueName: \"kubernetes.io/projected/70143f1c-5cf7-45b6-9490-8aa5535443c0-kube-api-access-6v2sc\") pod \"controller-manager-6b5857dcf-vw8hp\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") " pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:06.926501 master-0 kubenswrapper[7721]: I0216 02:07:06.925973 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-vvw25" podStartSLOduration=4.925954137 podStartE2EDuration="4.925954137s" podCreationTimestamp="2026-02-16 02:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:06.923908216 +0000 UTC m=+30.418142478" watchObservedRunningTime="2026-02-16 02:07:06.925954137 +0000 UTC m=+30.420188399"
Feb 16 02:07:06.928204 master-0 kubenswrapper[7721]: I0216 02:07:06.928177 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"]
Feb 16 02:07:06.936237 master-0 kubenswrapper[7721]: I0216 02:07:06.934697 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:06.936237 master-0 kubenswrapper[7721]: I0216 02:07:06.935945 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"]
Feb 16 02:07:06.939063 master-0 kubenswrapper[7721]: I0216 02:07:06.939016 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 02:07:06.940796 master-0 kubenswrapper[7721]: I0216 02:07:06.939485 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 16 02:07:06.942701 master-0 kubenswrapper[7721]: I0216 02:07:06.942675 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 02:07:06.942911 master-0 kubenswrapper[7721]: I0216 02:07:06.942878 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 02:07:06.943003 master-0 kubenswrapper[7721]: I0216 02:07:06.942989 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 02:07:06.943121 master-0 kubenswrapper[7721]: I0216 02:07:06.943107 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 02:07:06.943230 master-0 kubenswrapper[7721]: I0216 02:07:06.943216 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 02:07:06.943327 master-0 kubenswrapper[7721]: I0216 02:07:06.943314 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 02:07:06.946947 master-0 kubenswrapper[7721]: I0216 02:07:06.946917 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:07:06.947003 master-0 kubenswrapper[7721]: I0216 02:07:06.946958 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac81030-35d1-4d86-844d-65d1156d8944-config-volume\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:06.947003 master-0 kubenswrapper[7721]: I0216 02:07:06.946999 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqmhs\" (UniqueName: \"kubernetes.io/projected/7ac81030-35d1-4d86-844d-65d1156d8944-kube-api-access-lqmhs\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:06.947088 master-0 kubenswrapper[7721]: I0216 02:07:06.947065 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ac81030-35d1-4d86-844d-65d1156d8944-metrics-tls\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:06.950892 master-0 kubenswrapper[7721]: I0216 02:07:06.950861 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") pod \"apiserver-7b55c578b7-cvwml\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") " pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:07:06.962060 master-0 kubenswrapper[7721]: I0216 02:07:06.961337 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:07:07.031593 master-0 kubenswrapper[7721]: I0216 02:07:07.023900 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:07.050793 master-0 kubenswrapper[7721]: I0216 02:07:07.050732 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") pod \"89960c43-d761-48e4-a1e0-b25013788ac4\" (UID: \"89960c43-d761-48e4-a1e0-b25013788ac4\") "
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.050934 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-client\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.050967 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-serving-ca\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051006 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-trusted-ca-bundle\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051029 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ac81030-35d1-4d86-844d-65d1156d8944-metrics-tls\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051068 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac81030-35d1-4d86-844d-65d1156d8944-config-volume\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051088 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-serving-cert\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051104 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-dir\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051128 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jxm\" (UniqueName: \"kubernetes.io/projected/d8b1d77b-0955-44f3-a780-e8b6813aff0b-kube-api-access-r9jxm\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051143 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-encryption-config\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051161 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqmhs\" (UniqueName: \"kubernetes.io/projected/7ac81030-35d1-4d86-844d-65d1156d8944-kube-api-access-lqmhs\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:07.051203 master-0 kubenswrapper[7721]: I0216 02:07:07.051176 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-policies\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.053227 master-0 kubenswrapper[7721]: I0216 02:07:07.053192 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac81030-35d1-4d86-844d-65d1156d8944-config-volume\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:07.055863 master-0 kubenswrapper[7721]: I0216 02:07:07.055781 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ac81030-35d1-4d86-844d-65d1156d8944-metrics-tls\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:07.062179 master-0 kubenswrapper[7721]: I0216 02:07:07.062106 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "89960c43-d761-48e4-a1e0-b25013788ac4" (UID: "89960c43-d761-48e4-a1e0-b25013788ac4"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:07:07.074640 master-0 kubenswrapper[7721]: I0216 02:07:07.074561 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqmhs\" (UniqueName: \"kubernetes.io/projected/7ac81030-35d1-4d86-844d-65d1156d8944-kube-api-access-lqmhs\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.151902 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-client\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.151983 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-serving-ca\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.152047 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-trusted-ca-bundle\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.152096 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-serving-cert\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.152111 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-dir\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.152133 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9jxm\" (UniqueName: \"kubernetes.io/projected/d8b1d77b-0955-44f3-a780-e8b6813aff0b-kube-api-access-r9jxm\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.152148 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-encryption-config\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.152164 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-policies\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.152219 7721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89960c43-d761-48e4-a1e0-b25013788ac4-etcd-client\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.152861 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-policies\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.153466 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:07.154239 master-0 kubenswrapper[7721]: I0216 02:07:07.154240 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-dir\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.155828 master-0 kubenswrapper[7721]: I0216 02:07:07.154624 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-serving-ca\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.155828 master-0 kubenswrapper[7721]: I0216 02:07:07.154690 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-trusted-ca-bundle\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.156111 master-0 kubenswrapper[7721]: I0216 02:07:07.156080 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-serving-cert\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.158545 master-0 kubenswrapper[7721]: I0216 02:07:07.158494 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-encryption-config\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.158882 master-0 kubenswrapper[7721]: I0216 02:07:07.158846 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7b55c578b7-cvwml"
Feb 16 02:07:07.172468 master-0 kubenswrapper[7721]: I0216 02:07:07.172144 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-client\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.177047 master-0 kubenswrapper[7721]: I0216 02:07:07.176989 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9jxm\" (UniqueName: \"kubernetes.io/projected/d8b1d77b-0955-44f3-a780-e8b6813aff0b-kube-api-access-r9jxm\") pod \"apiserver-747969bcdd-dth9n\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.207503 master-0 kubenswrapper[7721]: I0216 02:07:07.207307 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7b55c578b7-cvwml"]
Feb 16 02:07:07.215314 master-0 kubenswrapper[7721]: I0216 02:07:07.215183 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7b55c578b7-cvwml"]
Feb 16 02:07:07.260689 master-0 kubenswrapper[7721]: I0216 02:07:07.259783 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"
Feb 16 02:07:07.303710 master-0 kubenswrapper[7721]: I0216 02:07:07.303577 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7tjn9"]
Feb 16 02:07:07.304399 master-0 kubenswrapper[7721]: I0216 02:07:07.304373 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:07:07.316886 master-0 kubenswrapper[7721]: I0216 02:07:07.315098 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"]
Feb 16 02:07:07.327773 master-0 kubenswrapper[7721]: W0216 02:07:07.327697 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod857357a1_dc98_4dd5_98b3_c94b1ddf9dec.slice/crio-277a843980e79e0d5b023668b83b4bebf3c5a0fcb2193476c696948786d785da WatchSource:0}: Error finding container 277a843980e79e0d5b023668b83b4bebf3c5a0fcb2193476c696948786d785da: Status 404 returned error can't find the container with id 277a843980e79e0d5b023668b83b4bebf3c5a0fcb2193476c696948786d785da
Feb 16 02:07:07.420206 master-0 kubenswrapper[7721]: I0216 02:07:07.420122 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"]
Feb 16 02:07:07.436027 master-0 kubenswrapper[7721]: I0216 02:07:07.435996 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-67f79f9544-nhm26"]
Feb 16 02:07:07.436918 master-0 kubenswrapper[7721]: I0216 02:07:07.436898 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.439093 master-0 kubenswrapper[7721]: I0216 02:07:07.439061 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 02:07:07.439153 master-0 kubenswrapper[7721]: I0216 02:07:07.439136 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 02:07:07.439600 master-0 kubenswrapper[7721]: I0216 02:07:07.439582 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 02:07:07.440092 master-0 kubenswrapper[7721]: I0216 02:07:07.440057 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 02:07:07.440253 master-0 kubenswrapper[7721]: I0216 02:07:07.440227 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 02:07:07.440924 master-0 kubenswrapper[7721]: I0216 02:07:07.440903 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 02:07:07.444134 master-0 kubenswrapper[7721]: I0216 02:07:07.441461 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 02:07:07.444134 master-0 kubenswrapper[7721]: I0216 02:07:07.441680 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 02:07:07.444134 master-0 kubenswrapper[7721]: I0216 02:07:07.442091 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 02:07:07.449819 master-0 kubenswrapper[7721]: I0216 02:07:07.449773 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 02:07:07.450640 master-0 kubenswrapper[7721]: I0216 02:07:07.450602 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67f79f9544-nhm26"]
Feb 16 02:07:07.464564 master-0 kubenswrapper[7721]: I0216 02:07:07.464513 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a07cd28-a33d-4abd-9198-ba82bacd51ba-hosts-file\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:07:07.464679 master-0 kubenswrapper[7721]: I0216 02:07:07.464661 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j9vb\" (UniqueName: \"kubernetes.io/projected/1a07cd28-a33d-4abd-9198-ba82bacd51ba-kube-api-access-5j9vb\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:07:07.492384 master-0 kubenswrapper[7721]: I0216 02:07:07.488737 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"]
Feb 16 02:07:07.565570 master-0 kubenswrapper[7721]: I0216 02:07:07.565497 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-image-import-ca\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.565570 master-0 kubenswrapper[7721]: I0216 02:07:07.565562 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-node-pullsecrets\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.565744 master-0 kubenswrapper[7721]: I0216 02:07:07.565657 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-serving-cert\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.565775 master-0 kubenswrapper[7721]: I0216 02:07:07.565759 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a07cd28-a33d-4abd-9198-ba82bacd51ba-hosts-file\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:07:07.565865 master-0 kubenswrapper[7721]: I0216 02:07:07.565835 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-audit\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.565922 master-0 kubenswrapper[7721]: I0216 02:07:07.565903 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-trusted-ca-bundle\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.566057 master-0 kubenswrapper[7721]: I0216 02:07:07.566034 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j9vb\" (UniqueName: \"kubernetes.io/projected/1a07cd28-a33d-4abd-9198-ba82bacd51ba-kube-api-access-5j9vb\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:07:07.566095 master-0 kubenswrapper[7721]: I0216 02:07:07.566087 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-serving-ca\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.566128 master-0 kubenswrapper[7721]: I0216 02:07:07.566024 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a07cd28-a33d-4abd-9198-ba82bacd51ba-hosts-file\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:07:07.566160 master-0 kubenswrapper[7721]: I0216 02:07:07.566111 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-config\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.566209 master-0 kubenswrapper[7721]: I0216 02:07:07.566191 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-client\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.566266 master-0 kubenswrapper[7721]: I0216 02:07:07.566227 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-audit-dir\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.566313 master-0 kubenswrapper[7721]: I0216 02:07:07.566295 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhf7\" (UniqueName: \"kubernetes.io/projected/ab3f83a0-cda8-441c-9069-455a731b0a89-kube-api-access-xlhf7\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.566376 master-0 kubenswrapper[7721]: I0216 02:07:07.566361 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-encryption-config\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.588483 master-0 kubenswrapper[7721]: I0216 02:07:07.588263 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j9vb\" (UniqueName: \"kubernetes.io/projected/1a07cd28-a33d-4abd-9198-ba82bacd51ba-kube-api-access-5j9vb\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:07:07.601262 master-0 kubenswrapper[7721]: I0216 02:07:07.601202 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-njlg6"]
Feb 16 02:07:07.644154 master-0 kubenswrapper[7721]: I0216 02:07:07.644108 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:07:07.670533 master-0 kubenswrapper[7721]: I0216 02:07:07.670070 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-image-import-ca\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670533 master-0 kubenswrapper[7721]: I0216 02:07:07.670106 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-node-pullsecrets\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670533 master-0 kubenswrapper[7721]: I0216 02:07:07.670127 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-serving-cert\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670533 master-0 kubenswrapper[7721]: I0216 02:07:07.670163 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-audit\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670533 master-0 kubenswrapper[7721]: I0216 02:07:07.670415 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-trusted-ca-bundle\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670709 master-0 kubenswrapper[7721]: I0216 02:07:07.670549 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-serving-ca\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670709 master-0 kubenswrapper[7721]: I0216 02:07:07.670570 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-config\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670709 master-0 kubenswrapper[7721]: I0216 02:07:07.670589 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-client\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670709 master-0 kubenswrapper[7721]: I0216 02:07:07.670608 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-audit-dir\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670709 master-0 kubenswrapper[7721]: I0216 02:07:07.670632 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhf7\" (UniqueName: \"kubernetes.io/projected/ab3f83a0-cda8-441c-9069-455a731b0a89-kube-api-access-xlhf7\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670709 master-0 kubenswrapper[7721]: I0216 02:07:07.670665 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-encryption-config\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670865 master-0 kubenswrapper[7721]: I0216 02:07:07.670766 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-image-import-ca\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.670898 master-0 kubenswrapper[7721]: I0216 02:07:07.670850 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-node-pullsecrets\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.671226 master-0 kubenswrapper[7721]: I0216 02:07:07.671190 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-audit\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:07.671368 master-0 kubenswrapper[7721]: I0216 02:07:07.671339 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-audit-dir\") pod
\"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:07.676310 master-0 kubenswrapper[7721]: I0216 02:07:07.676268 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-config\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:07.679289 master-0 kubenswrapper[7721]: I0216 02:07:07.679255 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-client\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:07.679637 master-0 kubenswrapper[7721]: I0216 02:07:07.679606 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-encryption-config\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:07.679976 master-0 kubenswrapper[7721]: I0216 02:07:07.679848 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-serving-ca\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:07.680344 master-0 kubenswrapper[7721]: I0216 02:07:07.680092 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-trusted-ca-bundle\") pod 
\"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:07.681886 master-0 kubenswrapper[7721]: I0216 02:07:07.681857 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-serving-cert\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:07.706319 master-0 kubenswrapper[7721]: I0216 02:07:07.701785 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"] Feb 16 02:07:07.706319 master-0 kubenswrapper[7721]: W0216 02:07:07.704859 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8b1d77b_0955_44f3_a780_e8b6813aff0b.slice/crio-e1b1da20eba3376b5d5cc12627127c03688f3c11231a9b29f93539b8e871d089 WatchSource:0}: Error finding container e1b1da20eba3376b5d5cc12627127c03688f3c11231a9b29f93539b8e871d089: Status 404 returned error can't find the container with id e1b1da20eba3376b5d5cc12627127c03688f3c11231a9b29f93539b8e871d089 Feb 16 02:07:07.727542 master-0 kubenswrapper[7721]: I0216 02:07:07.725877 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhf7\" (UniqueName: \"kubernetes.io/projected/ab3f83a0-cda8-441c-9069-455a731b0a89-kube-api-access-xlhf7\") pod \"apiserver-67f79f9544-nhm26\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:07.765290 master-0 kubenswrapper[7721]: I0216 02:07:07.765236 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:08.166760 master-0 kubenswrapper[7721]: I0216 02:07:08.166688 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7tjn9" event={"ID":"1a07cd28-a33d-4abd-9198-ba82bacd51ba","Type":"ContainerStarted","Data":"854eb9f114609aff404e3deeebb715aaa4039b8c1ca28fab4d83b7e0663f8755"} Feb 16 02:07:08.166760 master-0 kubenswrapper[7721]: I0216 02:07:08.166744 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7tjn9" event={"ID":"1a07cd28-a33d-4abd-9198-ba82bacd51ba","Type":"ContainerStarted","Data":"5ae32b90c0d9ee58a1aa24c15f184b908d4118e753afef2c6de3006c4e387eaa"} Feb 16 02:07:08.171400 master-0 kubenswrapper[7721]: I0216 02:07:08.171278 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" event={"ID":"d8b1d77b-0955-44f3-a780-e8b6813aff0b","Type":"ContainerStarted","Data":"e1b1da20eba3376b5d5cc12627127c03688f3c11231a9b29f93539b8e871d089"} Feb 16 02:07:08.175089 master-0 kubenswrapper[7721]: I0216 02:07:08.174366 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-njlg6" event={"ID":"7ac81030-35d1-4d86-844d-65d1156d8944","Type":"ContainerStarted","Data":"72eeb0715f00e0e3f5313c94b059bdc92b87e369f23bdc44266053d9ec61b371"} Feb 16 02:07:08.177923 master-0 kubenswrapper[7721]: I0216 02:07:08.177855 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" event={"ID":"27d876a7-6a48-4942-ad96-ed8ed3aa104b","Type":"ContainerStarted","Data":"c271ac813a8f5a3a39832ee7ead6a7d3a824c336acfc0b6cc420bc51065ffc80"} Feb 16 02:07:08.178084 master-0 kubenswrapper[7721]: I0216 02:07:08.177940 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" 
event={"ID":"27d876a7-6a48-4942-ad96-ed8ed3aa104b","Type":"ContainerStarted","Data":"940771c91c013a004b3132c01c764c048ed22316fa2e21d7b58deed65f3ed4cf"} Feb 16 02:07:08.178084 master-0 kubenswrapper[7721]: I0216 02:07:08.177964 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" event={"ID":"27d876a7-6a48-4942-ad96-ed8ed3aa104b","Type":"ContainerStarted","Data":"49e455816343db1118d91c9ccd06253262823aebbe81edbfd55679229021e38d"} Feb 16 02:07:08.178223 master-0 kubenswrapper[7721]: I0216 02:07:08.178112 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:08.179937 master-0 kubenswrapper[7721]: I0216 02:07:08.179882 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" event={"ID":"70143f1c-5cf7-45b6-9490-8aa5535443c0","Type":"ContainerStarted","Data":"9db2b93c65513a36c60691617c314fa4cc54b5ac8a400f33061ef12f09836b7a"} Feb 16 02:07:08.181812 master-0 kubenswrapper[7721]: I0216 02:07:08.181757 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" event={"ID":"857357a1-dc98-4dd5-98b3-c94b1ddf9dec","Type":"ContainerStarted","Data":"5636e1e80751f3a3c96789a21a3143daf15c7ab0cfa132d87dcb28a679f13f01"} Feb 16 02:07:08.181812 master-0 kubenswrapper[7721]: I0216 02:07:08.181803 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" event={"ID":"857357a1-dc98-4dd5-98b3-c94b1ddf9dec","Type":"ContainerStarted","Data":"d63a8c86661f89a3a49c867fa184a4421a34ce6f73b059534b4b61541e8a7cb9"} Feb 16 02:07:08.182004 master-0 kubenswrapper[7721]: I0216 02:07:08.181823 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" event={"ID":"857357a1-dc98-4dd5-98b3-c94b1ddf9dec","Type":"ContainerStarted","Data":"277a843980e79e0d5b023668b83b4bebf3c5a0fcb2193476c696948786d785da"} Feb 16 02:07:08.182977 master-0 kubenswrapper[7721]: I0216 02:07:08.182927 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:08.236358 master-0 kubenswrapper[7721]: I0216 02:07:08.232787 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7tjn9" podStartSLOduration=1.232763152 podStartE2EDuration="1.232763152s" podCreationTimestamp="2026-02-16 02:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:08.231253594 +0000 UTC m=+31.725487866" watchObservedRunningTime="2026-02-16 02:07:08.232763152 +0000 UTC m=+31.726997434" Feb 16 02:07:08.352503 master-0 kubenswrapper[7721]: I0216 02:07:08.336280 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67f79f9544-nhm26"] Feb 16 02:07:08.408466 master-0 kubenswrapper[7721]: I0216 02:07:08.393428 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" podStartSLOduration=2.393400699 podStartE2EDuration="2.393400699s" podCreationTimestamp="2026-02-16 02:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:08.353932969 +0000 UTC m=+31.848167261" watchObservedRunningTime="2026-02-16 02:07:08.393400699 +0000 UTC m=+31.887634961" Feb 16 02:07:08.430602 master-0 kubenswrapper[7721]: I0216 02:07:08.411043 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-apiserver/apiserver-67f79f9544-nhm26"] Feb 16 02:07:08.431857 master-0 kubenswrapper[7721]: I0216 02:07:08.431779 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" podStartSLOduration=2.431744921 podStartE2EDuration="2.431744921s" podCreationTimestamp="2026-02-16 02:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:08.408960625 +0000 UTC m=+31.903194887" watchObservedRunningTime="2026-02-16 02:07:08.431744921 +0000 UTC m=+31.925979183" Feb 16 02:07:08.733225 master-0 kubenswrapper[7721]: I0216 02:07:08.733076 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89960c43-d761-48e4-a1e0-b25013788ac4" path="/var/lib/kubelet/pods/89960c43-d761-48e4-a1e0-b25013788ac4/volumes" Feb 16 02:07:09.522397 master-0 kubenswrapper[7721]: I0216 02:07:09.522297 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:07:09.522397 master-0 kubenswrapper[7721]: I0216 02:07:09.522386 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:07:09.535044 master-0 kubenswrapper[7721]: I0216 02:07:09.532087 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:07:09.537799 master-0 kubenswrapper[7721]: I0216 02:07:09.536647 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:07:09.623405 master-0 kubenswrapper[7721]: I0216 02:07:09.623336 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:07:09.623405 master-0 kubenswrapper[7721]: I0216 02:07:09.623402 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:07:09.623863 master-0 kubenswrapper[7721]: I0216 02:07:09.623463 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " 
pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:07:09.623863 master-0 kubenswrapper[7721]: I0216 02:07:09.623491 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:07:09.623863 master-0 kubenswrapper[7721]: I0216 02:07:09.623515 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:07:09.627002 master-0 kubenswrapper[7721]: I0216 02:07:09.626964 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:07:09.627963 master-0 kubenswrapper[7721]: I0216 02:07:09.627911 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:07:09.628984 master-0 kubenswrapper[7721]: I0216 02:07:09.628928 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:07:09.632950 master-0 kubenswrapper[7721]: I0216 02:07:09.632861 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-62wr2\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:07:09.638589 master-0 kubenswrapper[7721]: I0216 02:07:09.638535 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:07:09.735024 master-0 kubenswrapper[7721]: W0216 02:07:09.734425 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab3f83a0_cda8_441c_9069_455a731b0a89.slice/crio-b7926415c82753ccf4491a81d02ad31d5faae903226a791550a4520f43659b98 WatchSource:0}: Error finding container b7926415c82753ccf4491a81d02ad31d5faae903226a791550a4520f43659b98: Status 404 returned error can't find the container with id b7926415c82753ccf4491a81d02ad31d5faae903226a791550a4520f43659b98 Feb 16 02:07:09.772093 master-0 kubenswrapper[7721]: I0216 02:07:09.772028 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:07:09.772372 master-0 kubenswrapper[7721]: I0216 02:07:09.772212 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:07:09.785950 master-0 kubenswrapper[7721]: I0216 02:07:09.785907 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:07:09.792269 master-0 kubenswrapper[7721]: I0216 02:07:09.792229 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:07:09.792823 master-0 kubenswrapper[7721]: I0216 02:07:09.792792 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:07:09.794598 master-0 kubenswrapper[7721]: I0216 02:07:09.794559 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:07:09.795104 master-0 kubenswrapper[7721]: I0216 02:07:09.795058 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:07:10.202016 master-0 kubenswrapper[7721]: I0216 02:07:10.201930 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67f79f9544-nhm26" event={"ID":"ab3f83a0-cda8-441c-9069-455a731b0a89","Type":"ContainerStarted","Data":"b7926415c82753ccf4491a81d02ad31d5faae903226a791550a4520f43659b98"} Feb 16 02:07:11.199126 master-0 kubenswrapper[7721]: I0216 02:07:11.198924 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 02:07:11.200059 master-0 kubenswrapper[7721]: I0216 02:07:11.199958 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="45476b57-538b-4031-80c9-8025a49e8e88" containerName="installer" containerID="cri-o://844468f5a50bbaafc70df5bd3186ac9f161b117793658af967159de5ce3fa619" gracePeriod=30 Feb 16 02:07:12.225057 master-0 kubenswrapper[7721]: I0216 02:07:12.224456 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" event={"ID":"70143f1c-5cf7-45b6-9490-8aa5535443c0","Type":"ContainerStarted","Data":"0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac"} Feb 16 02:07:12.227226 master-0 kubenswrapper[7721]: I0216 02:07:12.227168 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" Feb 16 02:07:12.235664 master-0 kubenswrapper[7721]: I0216 02:07:12.233617 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" event={"ID":"e80c9628-20e2-4326-b1f5-810fd755d7ca","Type":"ContainerStarted","Data":"c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86"} Feb 16 02:07:12.235664 master-0 kubenswrapper[7721]: I0216 02:07:12.233816 7721 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:12.238733 master-0 kubenswrapper[7721]: I0216 02:07:12.238683 7721 generic.go:334] "Generic (PLEG): container finished" podID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" containerID="16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c" exitCode=0 Feb 16 02:07:12.239504 master-0 kubenswrapper[7721]: I0216 02:07:12.238741 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" event={"ID":"d8b1d77b-0955-44f3-a780-e8b6813aff0b","Type":"ContainerDied","Data":"16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c"} Feb 16 02:07:12.250779 master-0 kubenswrapper[7721]: I0216 02:07:12.249889 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" Feb 16 02:07:12.259233 master-0 kubenswrapper[7721]: I0216 02:07:12.253787 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" podStartSLOduration=7.096010649 podStartE2EDuration="11.253768065s" podCreationTimestamp="2026-02-16 02:07:01 +0000 UTC" firstStartedPulling="2026-02-16 02:07:07.52033228 +0000 UTC m=+31.014566542" lastFinishedPulling="2026-02-16 02:07:11.678089686 +0000 UTC m=+35.172323958" observedRunningTime="2026-02-16 02:07:12.250224077 +0000 UTC m=+35.744458339" watchObservedRunningTime="2026-02-16 02:07:12.253768065 +0000 UTC m=+35.748002327" Feb 16 02:07:12.259233 master-0 kubenswrapper[7721]: I0216 02:07:12.256059 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-njlg6" event={"ID":"7ac81030-35d1-4d86-844d-65d1156d8944","Type":"ContainerStarted","Data":"3182b128ec7b8364565c3a695a97d17b1f213934954fd9a42fb754724af0c532"} Feb 16 02:07:12.312200 master-0 kubenswrapper[7721]: I0216 
02:07:12.311899 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" podStartSLOduration=5.776874627 podStartE2EDuration="11.311869367s" podCreationTimestamp="2026-02-16 02:07:01 +0000 UTC" firstStartedPulling="2026-02-16 02:07:06.135489187 +0000 UTC m=+29.629723459" lastFinishedPulling="2026-02-16 02:07:11.670483907 +0000 UTC m=+35.164718199" observedRunningTime="2026-02-16 02:07:12.282498578 +0000 UTC m=+35.776732840" watchObservedRunningTime="2026-02-16 02:07:12.311869367 +0000 UTC m=+35.806103639" Feb 16 02:07:12.333250 master-0 kubenswrapper[7721]: I0216 02:07:12.333189 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"] Feb 16 02:07:12.364181 master-0 kubenswrapper[7721]: I0216 02:07:12.364102 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"] Feb 16 02:07:12.364883 master-0 kubenswrapper[7721]: I0216 02:07:12.364845 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"] Feb 16 02:07:12.366468 master-0 kubenswrapper[7721]: I0216 02:07:12.366443 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"] Feb 16 02:07:12.368029 master-0 kubenswrapper[7721]: I0216 02:07:12.368002 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gn9mv"] Feb 16 02:07:12.449685 master-0 kubenswrapper[7721]: I0216 02:07:12.444962 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" Feb 16 02:07:12.568712 master-0 kubenswrapper[7721]: I0216 02:07:12.568494 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-7c64d55f8-62wr2"] Feb 16 02:07:12.570484 master-0 kubenswrapper[7721]: I0216 02:07:12.570068 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"] Feb 16 02:07:12.572168 master-0 kubenswrapper[7721]: W0216 02:07:12.572037 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6088119_1125_4271_8c0b_0675e700edd9.slice/crio-2acb9f3d5afb22161d4bddb28de7cc78d9278231cc5c43495ca4ac920d494a5e WatchSource:0}: Error finding container 2acb9f3d5afb22161d4bddb28de7cc78d9278231cc5c43495ca4ac920d494a5e: Status 404 returned error can't find the container with id 2acb9f3d5afb22161d4bddb28de7cc78d9278231cc5c43495ca4ac920d494a5e Feb 16 02:07:12.644920 master-0 kubenswrapper[7721]: I0216 02:07:12.644012 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"] Feb 16 02:07:13.263586 master-0 kubenswrapper[7721]: I0216 02:07:13.263500 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" event={"ID":"bde83629-b39c-401e-bc30-5ce205638918","Type":"ContainerStarted","Data":"7fa429b0e25c1a3fe1e0505256e1e19c0180fa5596c0ad7692ae5d9ed02cf363"} Feb 16 02:07:13.265223 master-0 kubenswrapper[7721]: I0216 02:07:13.265198 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" event={"ID":"23755f7f-dce6-4dcf-9664-22e3aedb5c81","Type":"ContainerStarted","Data":"c8796a4f760bb08512498223d9e68fdd1a396c25f9b1ae5dc3aeedc8a5ba0006"} Feb 16 02:07:13.265223 master-0 kubenswrapper[7721]: I0216 02:07:13.265222 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" 
event={"ID":"23755f7f-dce6-4dcf-9664-22e3aedb5c81","Type":"ContainerStarted","Data":"d84f8ee868ca64ba6d178e5808a6769d2388e0cd861fe9fb0b41b3a95b7ca11c"} Feb 16 02:07:13.267020 master-0 kubenswrapper[7721]: I0216 02:07:13.266969 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" event={"ID":"76915cba-7c11-4bd8-9943-81de74e7781b","Type":"ContainerStarted","Data":"47db8b5a98f2dc9d4943b35b3435ff0c482e2b9de6a84290f317a3f7a8c32db3"} Feb 16 02:07:13.271721 master-0 kubenswrapper[7721]: I0216 02:07:13.271686 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-njlg6" event={"ID":"7ac81030-35d1-4d86-844d-65d1156d8944","Type":"ContainerStarted","Data":"023fa503120756be248084f278b6e2896cdac8acf4d3e01530e888ded1bd0c15"} Feb 16 02:07:13.271905 master-0 kubenswrapper[7721]: I0216 02:07:13.271878 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-njlg6" Feb 16 02:07:13.273158 master-0 kubenswrapper[7721]: I0216 02:07:13.273124 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" event={"ID":"b6088119-1125-4271-8c0b-0675e700edd9","Type":"ContainerStarted","Data":"2acb9f3d5afb22161d4bddb28de7cc78d9278231cc5c43495ca4ac920d494a5e"} Feb 16 02:07:13.274510 master-0 kubenswrapper[7721]: I0216 02:07:13.274468 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" event={"ID":"21686a6d-f685-4fb6-98af-3e8a39c5981b","Type":"ContainerStarted","Data":"8ec6f01b2f5ea3a8202f9a73fc87e859e09b3484fd1471c52da0bdebc2c97dba"} Feb 16 02:07:13.275851 master-0 kubenswrapper[7721]: I0216 02:07:13.275812 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" 
event={"ID":"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74","Type":"ContainerStarted","Data":"28301e075bce0b536459155be0ee0c5701514de23367f3127b28f30bb9102319"} Feb 16 02:07:13.278190 master-0 kubenswrapper[7721]: I0216 02:07:13.278159 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gn9mv" event={"ID":"7f0f9b7d-e663-4927-861b-a9544d483b6e","Type":"ContainerStarted","Data":"62a59e4f54ec7d1606e491cb0a7ae58230aff5c54f133cd8f1f5aab5922fd486"} Feb 16 02:07:13.281593 master-0 kubenswrapper[7721]: I0216 02:07:13.281560 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" event={"ID":"d8b1d77b-0955-44f3-a780-e8b6813aff0b","Type":"ContainerStarted","Data":"83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24"} Feb 16 02:07:13.281883 master-0 kubenswrapper[7721]: I0216 02:07:13.281841 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" podUID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" containerName="oauth-apiserver" containerID="cri-o://83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24" gracePeriod=120 Feb 16 02:07:13.293888 master-0 kubenswrapper[7721]: I0216 02:07:13.293536 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-njlg6" podStartSLOduration=3.2610406149999998 podStartE2EDuration="7.293514212s" podCreationTimestamp="2026-02-16 02:07:06 +0000 UTC" firstStartedPulling="2026-02-16 02:07:07.637921308 +0000 UTC m=+31.132155560" lastFinishedPulling="2026-02-16 02:07:11.670394855 +0000 UTC m=+35.164629157" observedRunningTime="2026-02-16 02:07:13.293137362 +0000 UTC m=+36.787371624" watchObservedRunningTime="2026-02-16 02:07:13.293514212 +0000 UTC m=+36.787748474" Feb 16 02:07:13.317919 master-0 kubenswrapper[7721]: I0216 02:07:13.317523 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" podStartSLOduration=3.269091874 podStartE2EDuration="7.317496627s" podCreationTimestamp="2026-02-16 02:07:06 +0000 UTC" firstStartedPulling="2026-02-16 02:07:07.712102019 +0000 UTC m=+31.206336281" lastFinishedPulling="2026-02-16 02:07:11.760506772 +0000 UTC m=+35.254741034" observedRunningTime="2026-02-16 02:07:13.316332408 +0000 UTC m=+36.810566680" watchObservedRunningTime="2026-02-16 02:07:13.317496627 +0000 UTC m=+36.811730889" Feb 16 02:07:13.594136 master-0 kubenswrapper[7721]: I0216 02:07:13.594022 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 02:07:13.594902 master-0 kubenswrapper[7721]: I0216 02:07:13.594600 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.601350 master-0 kubenswrapper[7721]: I0216 02:07:13.601248 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 02:07:13.703101 master-0 kubenswrapper[7721]: I0216 02:07:13.702837 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-var-lock\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.703101 master-0 kubenswrapper[7721]: I0216 02:07:13.703002 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.704280 master-0 kubenswrapper[7721]: I0216 02:07:13.703674 7721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96142dec-d2af-459f-a45e-8e29ca30d4fd-kube-api-access\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.804977 master-0 kubenswrapper[7721]: I0216 02:07:13.804919 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-var-lock\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.805164 master-0 kubenswrapper[7721]: I0216 02:07:13.804996 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.805164 master-0 kubenswrapper[7721]: I0216 02:07:13.805026 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96142dec-d2af-459f-a45e-8e29ca30d4fd-kube-api-access\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.805535 master-0 kubenswrapper[7721]: I0216 02:07:13.805509 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-var-lock\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.805588 master-0 kubenswrapper[7721]: I0216 02:07:13.805549 7721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.821006 master-0 kubenswrapper[7721]: I0216 02:07:13.820966 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96142dec-d2af-459f-a45e-8e29ca30d4fd-kube-api-access\") pod \"installer-2-master-0\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:13.912618 master-0 kubenswrapper[7721]: I0216 02:07:13.912510 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:16.849194 master-0 kubenswrapper[7721]: I0216 02:07:16.849137 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:07:16.971458 master-0 kubenswrapper[7721]: I0216 02:07:16.969367 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:07:16.984323 master-0 kubenswrapper[7721]: I0216 02:07:16.983613 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:07:17.261908 master-0 kubenswrapper[7721]: I0216 02:07:17.261777 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" Feb 16 02:07:18.007466 master-0 kubenswrapper[7721]: I0216 02:07:18.002354 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"] Feb 16 02:07:18.007466 master-0 
kubenswrapper[7721]: I0216 02:07:18.002625 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" podUID="864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" containerName="cluster-version-operator" containerID="cri-o://84ef78af69cbe8091fbf2ce8a0527fe0720ef4fc8feb2eb70bee52acdec1eabf" gracePeriod=130 Feb 16 02:07:18.309895 master-0 kubenswrapper[7721]: I0216 02:07:18.309733 7721 generic.go:334] "Generic (PLEG): container finished" podID="864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" containerID="84ef78af69cbe8091fbf2ce8a0527fe0720ef4fc8feb2eb70bee52acdec1eabf" exitCode=0 Feb 16 02:07:18.309895 master-0 kubenswrapper[7721]: I0216 02:07:18.309829 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" event={"ID":"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff","Type":"ContainerDied","Data":"84ef78af69cbe8091fbf2ce8a0527fe0720ef4fc8feb2eb70bee52acdec1eabf"} Feb 16 02:07:20.876098 master-0 kubenswrapper[7721]: I0216 02:07:20.875271 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:07:21.003097 master-0 kubenswrapper[7721]: I0216 02:07:20.997849 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 02:07:21.053548 master-0 kubenswrapper[7721]: I0216 02:07:21.053506 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") pod \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " Feb 16 02:07:21.053697 master-0 kubenswrapper[7721]: I0216 02:07:21.053554 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs\") pod \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " Feb 16 02:07:21.053697 master-0 kubenswrapper[7721]: I0216 02:07:21.053598 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access\") pod \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " Feb 16 02:07:21.053697 master-0 kubenswrapper[7721]: I0216 02:07:21.053630 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca\") pod \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " Feb 16 02:07:21.053697 master-0 kubenswrapper[7721]: I0216 02:07:21.053656 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads\") pod \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\" (UID: \"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff\") " Feb 16 02:07:21.053809 master-0 kubenswrapper[7721]: I0216 02:07:21.053749 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:21.053873 master-0 kubenswrapper[7721]: I0216 02:07:21.053855 7721 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:21.055676 master-0 kubenswrapper[7721]: I0216 02:07:21.055609 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca" (OuterVolumeSpecName: "service-ca") pod "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:21.055730 master-0 kubenswrapper[7721]: I0216 02:07:21.055699 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff"). InnerVolumeSpecName "etc-cvo-updatepayloads". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:21.059241 master-0 kubenswrapper[7721]: I0216 02:07:21.059188 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:07:21.062846 master-0 kubenswrapper[7721]: I0216 02:07:21.062463 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" (UID: "864c0ef4-319c-457c-aa3b-adf0c3e5a0ff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:07:21.116832 master-0 kubenswrapper[7721]: I0216 02:07:21.115137 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 16 02:07:21.116832 master-0 kubenswrapper[7721]: E0216 02:07:21.115369 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" containerName="cluster-version-operator" Feb 16 02:07:21.116832 master-0 kubenswrapper[7721]: I0216 02:07:21.115385 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" containerName="cluster-version-operator" Feb 16 02:07:21.116832 master-0 kubenswrapper[7721]: I0216 02:07:21.115492 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" containerName="cluster-version-operator" Feb 16 02:07:21.116832 master-0 kubenswrapper[7721]: I0216 02:07:21.115862 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.118189 master-0 kubenswrapper[7721]: I0216 02:07:21.118144 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 16 02:07:21.128999 master-0 kubenswrapper[7721]: I0216 02:07:21.128368 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 16 02:07:21.154469 master-0 kubenswrapper[7721]: I0216 02:07:21.154393 7721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:21.158517 master-0 kubenswrapper[7721]: I0216 02:07:21.154715 7721 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:21.158517 master-0 kubenswrapper[7721]: I0216 02:07:21.154745 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:21.158517 master-0 kubenswrapper[7721]: I0216 02:07:21.154756 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:21.257515 master-0 kubenswrapper[7721]: I0216 02:07:21.257408 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.257515 master-0 
kubenswrapper[7721]: I0216 02:07:21.257484 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-var-lock\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.257515 master-0 kubenswrapper[7721]: I0216 02:07:21.257505 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.268489 master-0 kubenswrapper[7721]: I0216 02:07:21.265568 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"] Feb 16 02:07:21.268489 master-0 kubenswrapper[7721]: I0216 02:07:21.265898 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" podUID="70143f1c-5cf7-45b6-9490-8aa5535443c0" containerName="controller-manager" containerID="cri-o://0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac" gracePeriod=30 Feb 16 02:07:21.286197 master-0 kubenswrapper[7721]: I0216 02:07:21.285649 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 02:07:21.299261 master-0 kubenswrapper[7721]: I0216 02:07:21.296222 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l"] Feb 16 02:07:21.299261 master-0 kubenswrapper[7721]: I0216 02:07:21.296611 7721 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" podUID="e80c9628-20e2-4326-b1f5-810fd755d7ca" containerName="route-controller-manager" containerID="cri-o://c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86" gracePeriod=30 Feb 16 02:07:21.344566 master-0 kubenswrapper[7721]: I0216 02:07:21.342005 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" event={"ID":"b6088119-1125-4271-8c0b-0675e700edd9","Type":"ContainerStarted","Data":"1ad7005008299c6a17798fe23c749281008c22dc9cf9892a566f5f5d5a934a24"} Feb 16 02:07:21.347512 master-0 kubenswrapper[7721]: I0216 02:07:21.346556 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" event={"ID":"21686a6d-f685-4fb6-98af-3e8a39c5981b","Type":"ContainerStarted","Data":"f19c0db2c5c9e48c5edd2936388785d2e52f4913a586e0e1aef49faccfb6f1de"} Feb 16 02:07:21.352790 master-0 kubenswrapper[7721]: I0216 02:07:21.352046 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" event={"ID":"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74","Type":"ContainerStarted","Data":"53bb6dfce420388b911925622f667674fa6239f4f9c397171d6328d4738fab3f"} Feb 16 02:07:21.352790 master-0 kubenswrapper[7721]: I0216 02:07:21.352587 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:07:21.354600 master-0 kubenswrapper[7721]: I0216 02:07:21.354493 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" Feb 16 02:07:21.356688 master-0 kubenswrapper[7721]: I0216 02:07:21.354704 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl" event={"ID":"864c0ef4-319c-457c-aa3b-adf0c3e5a0ff","Type":"ContainerDied","Data":"3882e37ffb30b9304e7e790d6887d03af1856850530011753433622539e4cab4"} Feb 16 02:07:21.356688 master-0 kubenswrapper[7721]: W0216 02:07:21.356600 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod96142dec_d2af_459f_a45e_8e29ca30d4fd.slice/crio-aecda77080879d1c94103f2409e2d691f37346ebab3adcb041c89ffa50e6d4c1 WatchSource:0}: Error finding container aecda77080879d1c94103f2409e2d691f37346ebab3adcb041c89ffa50e6d4c1: Status 404 returned error can't find the container with id aecda77080879d1c94103f2409e2d691f37346ebab3adcb041c89ffa50e6d4c1 Feb 16 02:07:21.356869 master-0 kubenswrapper[7721]: I0216 02:07:21.356793 7721 scope.go:117] "RemoveContainer" containerID="84ef78af69cbe8091fbf2ce8a0527fe0720ef4fc8feb2eb70bee52acdec1eabf" Feb 16 02:07:21.358662 master-0 kubenswrapper[7721]: I0216 02:07:21.358163 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-var-lock\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.358662 master-0 kubenswrapper[7721]: I0216 02:07:21.358218 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.358662 master-0 kubenswrapper[7721]: I0216 
02:07:21.358257 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.361038 master-0 kubenswrapper[7721]: I0216 02:07:21.359635 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.365541 master-0 kubenswrapper[7721]: I0216 02:07:21.362919 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-var-lock\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.365541 master-0 kubenswrapper[7721]: I0216 02:07:21.363280 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" event={"ID":"23755f7f-dce6-4dcf-9664-22e3aedb5c81","Type":"ContainerStarted","Data":"2f12531c82f370a1fa09ec7f01326ed0fd582df87939a5c0bd560230586f4734"} Feb 16 02:07:21.365541 master-0 kubenswrapper[7721]: I0216 02:07:21.363459 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:07:21.365541 master-0 kubenswrapper[7721]: I0216 02:07:21.365127 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" 
event={"ID":"76915cba-7c11-4bd8-9943-81de74e7781b","Type":"ContainerStarted","Data":"6df9afe0b4e2ceadcdd8779f874831c8d8eace0ab8f743be66a2864b8eb1b140"} Feb 16 02:07:21.366607 master-0 kubenswrapper[7721]: I0216 02:07:21.365989 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:07:21.368398 master-0 kubenswrapper[7721]: I0216 02:07:21.368265 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gn9mv" event={"ID":"7f0f9b7d-e663-4927-861b-a9544d483b6e","Type":"ContainerStarted","Data":"3b8fc043937a6531bb169606abb9959b7b8165d000a55e33530731cf255e350b"} Feb 16 02:07:21.372346 master-0 kubenswrapper[7721]: I0216 02:07:21.372317 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:07:21.374527 master-0 kubenswrapper[7721]: I0216 02:07:21.374483 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:07:21.384620 master-0 kubenswrapper[7721]: I0216 02:07:21.383645 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" event={"ID":"bde83629-b39c-401e-bc30-5ce205638918","Type":"ContainerStarted","Data":"4d828055a40abd365d5f9304f3bb2e1eea303420e0dae2b1729b6e96c17c65b6"} Feb 16 02:07:21.384620 master-0 kubenswrapper[7721]: I0216 02:07:21.384540 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:07:21.389711 master-0 kubenswrapper[7721]: I0216 02:07:21.386277 7721 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-8nl7s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Feb 16 02:07:21.389711 master-0 kubenswrapper[7721]: I0216 02:07:21.386333 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" podUID="bde83629-b39c-401e-bc30-5ce205638918" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Feb 16 02:07:21.400416 master-0 kubenswrapper[7721]: I0216 02:07:21.400267 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.485598 master-0 kubenswrapper[7721]: I0216 02:07:21.484807 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 02:07:21.501392 master-0 kubenswrapper[7721]: I0216 02:07:21.499145 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"] Feb 16 02:07:21.509700 master-0 kubenswrapper[7721]: I0216 02:07:21.505530 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-9fxxl"] Feb 16 02:07:21.560312 master-0 kubenswrapper[7721]: I0216 02:07:21.560261 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"] Feb 16 02:07:21.571319 master-0 kubenswrapper[7721]: I0216 02:07:21.571209 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" Feb 16 02:07:21.574378 master-0 kubenswrapper[7721]: I0216 02:07:21.574339 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 02:07:21.574604 master-0 kubenswrapper[7721]: I0216 02:07:21.574559 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 02:07:21.575482 master-0 kubenswrapper[7721]: I0216 02:07:21.575344 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 02:07:21.669486 master-0 kubenswrapper[7721]: I0216 02:07:21.669297 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad700b17-ba2a-41d4-8bec-538a009a613b-serving-cert\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" Feb 16 02:07:21.669486 master-0 kubenswrapper[7721]: I0216 02:07:21.669394 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad700b17-ba2a-41d4-8bec-538a009a613b-kube-api-access\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" Feb 16 02:07:21.669486 master-0 kubenswrapper[7721]: I0216 02:07:21.669423 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: 
\"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.669486 master-0 kubenswrapper[7721]: I0216 02:07:21.669465 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ad700b17-ba2a-41d4-8bec-538a009a613b-service-ca\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.669486 master-0 kubenswrapper[7721]: I0216 02:07:21.669492 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.744695 master-0 kubenswrapper[7721]: I0216 02:07:21.744642 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:21.770484 master-0 kubenswrapper[7721]: I0216 02:07:21.770428 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad700b17-ba2a-41d4-8bec-538a009a613b-kube-api-access\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.770675 master-0 kubenswrapper[7721]: I0216 02:07:21.770487 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.770675 master-0 kubenswrapper[7721]: I0216 02:07:21.770549 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ad700b17-ba2a-41d4-8bec-538a009a613b-service-ca\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.770675 master-0 kubenswrapper[7721]: I0216 02:07:21.770570 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.770675 master-0 kubenswrapper[7721]: I0216 02:07:21.770608 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad700b17-ba2a-41d4-8bec-538a009a613b-serving-cert\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.771099 master-0 kubenswrapper[7721]: I0216 02:07:21.771053 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.771163 master-0 kubenswrapper[7721]: I0216 02:07:21.771118 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.773253 master-0 kubenswrapper[7721]: I0216 02:07:21.773205 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ad700b17-ba2a-41d4-8bec-538a009a613b-service-ca\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.774692 master-0 kubenswrapper[7721]: I0216 02:07:21.774640 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad700b17-ba2a-41d4-8bec-538a009a613b-serving-cert\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.788595 master-0 kubenswrapper[7721]: I0216 02:07:21.788552 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad700b17-ba2a-41d4-8bec-538a009a613b-kube-api-access\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.872013 master-0 kubenswrapper[7721]: I0216 02:07:21.871963 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-config\") pod \"70143f1c-5cf7-45b6-9490-8aa5535443c0\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") "
Feb 16 02:07:21.872109 master-0 kubenswrapper[7721]: I0216 02:07:21.872057 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v2sc\" (UniqueName: \"kubernetes.io/projected/70143f1c-5cf7-45b6-9490-8aa5535443c0-kube-api-access-6v2sc\") pod \"70143f1c-5cf7-45b6-9490-8aa5535443c0\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") "
Feb 16 02:07:21.872145 master-0 kubenswrapper[7721]: I0216 02:07:21.872105 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-client-ca\") pod \"70143f1c-5cf7-45b6-9490-8aa5535443c0\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") "
Feb 16 02:07:21.872176 master-0 kubenswrapper[7721]: I0216 02:07:21.872157 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-proxy-ca-bundles\") pod \"70143f1c-5cf7-45b6-9490-8aa5535443c0\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") "
Feb 16 02:07:21.872235 master-0 kubenswrapper[7721]: I0216 02:07:21.872194 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70143f1c-5cf7-45b6-9490-8aa5535443c0-serving-cert\") pod \"70143f1c-5cf7-45b6-9490-8aa5535443c0\" (UID: \"70143f1c-5cf7-45b6-9490-8aa5535443c0\") "
Feb 16 02:07:21.873033 master-0 kubenswrapper[7721]: I0216 02:07:21.872931 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "70143f1c-5cf7-45b6-9490-8aa5535443c0" (UID: "70143f1c-5cf7-45b6-9490-8aa5535443c0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:21.873549 master-0 kubenswrapper[7721]: I0216 02:07:21.873514 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-config" (OuterVolumeSpecName: "config") pod "70143f1c-5cf7-45b6-9490-8aa5535443c0" (UID: "70143f1c-5cf7-45b6-9490-8aa5535443c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:21.873686 master-0 kubenswrapper[7721]: I0216 02:07:21.873648 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70143f1c-5cf7-45b6-9490-8aa5535443c0" (UID: "70143f1c-5cf7-45b6-9490-8aa5535443c0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:21.877098 master-0 kubenswrapper[7721]: I0216 02:07:21.877056 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70143f1c-5cf7-45b6-9490-8aa5535443c0-kube-api-access-6v2sc" (OuterVolumeSpecName: "kube-api-access-6v2sc") pod "70143f1c-5cf7-45b6-9490-8aa5535443c0" (UID: "70143f1c-5cf7-45b6-9490-8aa5535443c0"). InnerVolumeSpecName "kube-api-access-6v2sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:07:21.877523 master-0 kubenswrapper[7721]: I0216 02:07:21.877462 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70143f1c-5cf7-45b6-9490-8aa5535443c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70143f1c-5cf7-45b6-9490-8aa5535443c0" (UID: "70143f1c-5cf7-45b6-9490-8aa5535443c0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:07:21.904030 master-0 kubenswrapper[7721]: I0216 02:07:21.903960 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l"
Feb 16 02:07:21.911404 master-0 kubenswrapper[7721]: I0216 02:07:21.911380 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:07:21.932694 master-0 kubenswrapper[7721]: W0216 02:07:21.932630 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad700b17_ba2a_41d4_8bec_538a009a613b.slice/crio-c4f6fda3b9ae78250d34cf8b7c30c1c11f08347cd23c715bd4e28a7a30204cde WatchSource:0}: Error finding container c4f6fda3b9ae78250d34cf8b7c30c1c11f08347cd23c715bd4e28a7a30204cde: Status 404 returned error can't find the container with id c4f6fda3b9ae78250d34cf8b7c30c1c11f08347cd23c715bd4e28a7a30204cde
Feb 16 02:07:21.981065 master-0 kubenswrapper[7721]: I0216 02:07:21.981004 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70143f1c-5cf7-45b6-9490-8aa5535443c0-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:21.981065 master-0 kubenswrapper[7721]: I0216 02:07:21.981051 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:21.981065 master-0 kubenswrapper[7721]: I0216 02:07:21.981067 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v2sc\" (UniqueName: \"kubernetes.io/projected/70143f1c-5cf7-45b6-9490-8aa5535443c0-kube-api-access-6v2sc\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:21.981334 master-0 kubenswrapper[7721]: I0216 02:07:21.981080 7721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:21.981334 master-0 kubenswrapper[7721]: I0216 02:07:21.981093 7721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70143f1c-5cf7-45b6-9490-8aa5535443c0-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:22.066472 master-0 kubenswrapper[7721]: I0216 02:07:22.066200 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 16 02:07:22.081427 master-0 kubenswrapper[7721]: I0216 02:07:22.081349 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-client-ca\") pod \"e80c9628-20e2-4326-b1f5-810fd755d7ca\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") "
Feb 16 02:07:22.081427 master-0 kubenswrapper[7721]: I0216 02:07:22.081411 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-config\") pod \"e80c9628-20e2-4326-b1f5-810fd755d7ca\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") "
Feb 16 02:07:22.081553 master-0 kubenswrapper[7721]: I0216 02:07:22.081466 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvm7n\" (UniqueName: \"kubernetes.io/projected/e80c9628-20e2-4326-b1f5-810fd755d7ca-kube-api-access-bvm7n\") pod \"e80c9628-20e2-4326-b1f5-810fd755d7ca\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") "
Feb 16 02:07:22.081553 master-0 kubenswrapper[7721]: I0216 02:07:22.081499 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c9628-20e2-4326-b1f5-810fd755d7ca-serving-cert\") pod \"e80c9628-20e2-4326-b1f5-810fd755d7ca\" (UID: \"e80c9628-20e2-4326-b1f5-810fd755d7ca\") "
Feb 16 02:07:22.082449 master-0 kubenswrapper[7721]: I0216 02:07:22.082398 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "e80c9628-20e2-4326-b1f5-810fd755d7ca" (UID: "e80c9628-20e2-4326-b1f5-810fd755d7ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:22.084144 master-0 kubenswrapper[7721]: I0216 02:07:22.084075 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-config" (OuterVolumeSpecName: "config") pod "e80c9628-20e2-4326-b1f5-810fd755d7ca" (UID: "e80c9628-20e2-4326-b1f5-810fd755d7ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:22.084748 master-0 kubenswrapper[7721]: I0216 02:07:22.084712 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80c9628-20e2-4326-b1f5-810fd755d7ca-kube-api-access-bvm7n" (OuterVolumeSpecName: "kube-api-access-bvm7n") pod "e80c9628-20e2-4326-b1f5-810fd755d7ca" (UID: "e80c9628-20e2-4326-b1f5-810fd755d7ca"). InnerVolumeSpecName "kube-api-access-bvm7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:07:22.086660 master-0 kubenswrapper[7721]: I0216 02:07:22.085630 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80c9628-20e2-4326-b1f5-810fd755d7ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e80c9628-20e2-4326-b1f5-810fd755d7ca" (UID: "e80c9628-20e2-4326-b1f5-810fd755d7ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:07:22.138281 master-0 kubenswrapper[7721]: I0216 02:07:22.138227 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9kb98"]
Feb 16 02:07:22.138478 master-0 kubenswrapper[7721]: E0216 02:07:22.138393 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80c9628-20e2-4326-b1f5-810fd755d7ca" containerName="route-controller-manager"
Feb 16 02:07:22.138478 master-0 kubenswrapper[7721]: I0216 02:07:22.138406 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80c9628-20e2-4326-b1f5-810fd755d7ca" containerName="route-controller-manager"
Feb 16 02:07:22.138478 master-0 kubenswrapper[7721]: E0216 02:07:22.138420 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70143f1c-5cf7-45b6-9490-8aa5535443c0" containerName="controller-manager"
Feb 16 02:07:22.138478 master-0 kubenswrapper[7721]: I0216 02:07:22.138426 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70143f1c-5cf7-45b6-9490-8aa5535443c0" containerName="controller-manager"
Feb 16 02:07:22.138604 master-0 kubenswrapper[7721]: I0216 02:07:22.138508 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80c9628-20e2-4326-b1f5-810fd755d7ca" containerName="route-controller-manager"
Feb 16 02:07:22.138604 master-0 kubenswrapper[7721]: I0216 02:07:22.138518 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70143f1c-5cf7-45b6-9490-8aa5535443c0" containerName="controller-manager"
Feb 16 02:07:22.139015 master-0 kubenswrapper[7721]: I0216 02:07:22.138990 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.156345 master-0 kubenswrapper[7721]: I0216 02:07:22.156298 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-njlg6"
Feb 16 02:07:22.163898 master-0 kubenswrapper[7721]: I0216 02:07:22.163804 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kb98"]
Feb 16 02:07:22.182533 master-0 kubenswrapper[7721]: I0216 02:07:22.182359 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvm7n\" (UniqueName: \"kubernetes.io/projected/e80c9628-20e2-4326-b1f5-810fd755d7ca-kube-api-access-bvm7n\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:22.182533 master-0 kubenswrapper[7721]: I0216 02:07:22.182426 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c9628-20e2-4326-b1f5-810fd755d7ca-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:22.182533 master-0 kubenswrapper[7721]: I0216 02:07:22.182484 7721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:22.182533 master-0 kubenswrapper[7721]: I0216 02:07:22.182511 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c9628-20e2-4326-b1f5-810fd755d7ca-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:22.284144 master-0 kubenswrapper[7721]: I0216 02:07:22.284071 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-catalog-content\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.284384 master-0 kubenswrapper[7721]: I0216 02:07:22.284348 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcbl\" (UniqueName: \"kubernetes.io/projected/774b6ff9-0e37-48fd-96c6-571859fec492-kube-api-access-vzcbl\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.284479 master-0 kubenswrapper[7721]: I0216 02:07:22.284396 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-utilities\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.385640 master-0 kubenswrapper[7721]: I0216 02:07:22.385035 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-catalog-content\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.385640 master-0 kubenswrapper[7721]: I0216 02:07:22.385098 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcbl\" (UniqueName: \"kubernetes.io/projected/774b6ff9-0e37-48fd-96c6-571859fec492-kube-api-access-vzcbl\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.385640 master-0 kubenswrapper[7721]: I0216 02:07:22.385116 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-utilities\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.385640 master-0 kubenswrapper[7721]: I0216 02:07:22.385531 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-catalog-content\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.386076 master-0 kubenswrapper[7721]: I0216 02:07:22.386052 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-utilities\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.395286 master-0 kubenswrapper[7721]: I0216 02:07:22.394645 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gn9mv" event={"ID":"7f0f9b7d-e663-4927-861b-a9544d483b6e","Type":"ContainerStarted","Data":"278d7a4ef9c96758de03a855530dd794e64b5f5d06a10f9ec9ec6c31e3d67e58"}
Feb 16 02:07:22.398395 master-0 kubenswrapper[7721]: I0216 02:07:22.398013 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" event={"ID":"ad700b17-ba2a-41d4-8bec-538a009a613b","Type":"ContainerStarted","Data":"0f4270e5e44e4ba946d497e39a29fcdd94ebfa1e344531fd5ab06971f1a503e0"}
Feb 16 02:07:22.398395 master-0 kubenswrapper[7721]: I0216 02:07:22.398060 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" event={"ID":"ad700b17-ba2a-41d4-8bec-538a009a613b","Type":"ContainerStarted","Data":"c4f6fda3b9ae78250d34cf8b7c30c1c11f08347cd23c715bd4e28a7a30204cde"}
Feb 16 02:07:22.401320 master-0 kubenswrapper[7721]: I0216 02:07:22.401246 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"4733c2df-0f5a-4696-b8c6-2568ebc7debc","Type":"ContainerStarted","Data":"8066623cf1d51e73e4a446a8cf51f10003367928813b2f433dc4283c2b007eff"}
Feb 16 02:07:22.405858 master-0 kubenswrapper[7721]: I0216 02:07:22.405789 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcbl\" (UniqueName: \"kubernetes.io/projected/774b6ff9-0e37-48fd-96c6-571859fec492-kube-api-access-vzcbl\") pod \"redhat-operators-9kb98\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.406705 master-0 kubenswrapper[7721]: I0216 02:07:22.406169 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"96142dec-d2af-459f-a45e-8e29ca30d4fd","Type":"ContainerStarted","Data":"9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126"}
Feb 16 02:07:22.406705 master-0 kubenswrapper[7721]: I0216 02:07:22.406207 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"96142dec-d2af-459f-a45e-8e29ca30d4fd","Type":"ContainerStarted","Data":"aecda77080879d1c94103f2409e2d691f37346ebab3adcb041c89ffa50e6d4c1"}
Feb 16 02:07:22.406705 master-0 kubenswrapper[7721]: I0216 02:07:22.406344 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="96142dec-d2af-459f-a45e-8e29ca30d4fd" containerName="installer" containerID="cri-o://9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126" gracePeriod=30
Feb 16 02:07:22.415461 master-0 kubenswrapper[7721]: I0216 02:07:22.409494 7721 generic.go:334] "Generic (PLEG): container finished" podID="70143f1c-5cf7-45b6-9490-8aa5535443c0" containerID="0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac" exitCode=0
Feb 16 02:07:22.415461 master-0 kubenswrapper[7721]: I0216 02:07:22.409557 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" event={"ID":"70143f1c-5cf7-45b6-9490-8aa5535443c0","Type":"ContainerDied","Data":"0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac"}
Feb 16 02:07:22.415461 master-0 kubenswrapper[7721]: I0216 02:07:22.409585 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp" event={"ID":"70143f1c-5cf7-45b6-9490-8aa5535443c0","Type":"ContainerDied","Data":"9db2b93c65513a36c60691617c314fa4cc54b5ac8a400f33061ef12f09836b7a"}
Feb 16 02:07:22.415461 master-0 kubenswrapper[7721]: I0216 02:07:22.409608 7721 scope.go:117] "RemoveContainer" containerID="0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac"
Feb 16 02:07:22.415461 master-0 kubenswrapper[7721]: I0216 02:07:22.409735 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"
Feb 16 02:07:22.423792 master-0 kubenswrapper[7721]: I0216 02:07:22.421662 7721 generic.go:334] "Generic (PLEG): container finished" podID="e80c9628-20e2-4326-b1f5-810fd755d7ca" containerID="c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86" exitCode=0
Feb 16 02:07:22.423792 master-0 kubenswrapper[7721]: I0216 02:07:22.421740 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" event={"ID":"e80c9628-20e2-4326-b1f5-810fd755d7ca","Type":"ContainerDied","Data":"c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86"}
Feb 16 02:07:22.423792 master-0 kubenswrapper[7721]: I0216 02:07:22.421775 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l" event={"ID":"e80c9628-20e2-4326-b1f5-810fd755d7ca","Type":"ContainerDied","Data":"c8075716643da52e20d813a5b534de5957e55b1d15acbbf38c5599862fa2e772"}
Feb 16 02:07:22.423792 master-0 kubenswrapper[7721]: I0216 02:07:22.421860 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l"
Feb 16 02:07:22.424078 master-0 kubenswrapper[7721]: I0216 02:07:22.424043 7721 generic.go:334] "Generic (PLEG): container finished" podID="ab3f83a0-cda8-441c-9069-455a731b0a89" containerID="18d499d8a526dd8edbbf8498d799fc9ee812e51b7a856a0eff5dce3411756de5" exitCode=0
Feb 16 02:07:22.424145 master-0 kubenswrapper[7721]: I0216 02:07:22.424117 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67f79f9544-nhm26" event={"ID":"ab3f83a0-cda8-441c-9069-455a731b0a89","Type":"ContainerDied","Data":"18d499d8a526dd8edbbf8498d799fc9ee812e51b7a856a0eff5dce3411756de5"}
Feb 16 02:07:22.429650 master-0 kubenswrapper[7721]: I0216 02:07:22.429626 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" event={"ID":"b6088119-1125-4271-8c0b-0675e700edd9","Type":"ContainerStarted","Data":"0a7334b26fd5842515d0403030d7f8a503f042b5470ce6d8a2e80440c021f184"}
Feb 16 02:07:22.433722 master-0 kubenswrapper[7721]: I0216 02:07:22.433692 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:07:22.435872 master-0 kubenswrapper[7721]: I0216 02:07:22.435853 7721 scope.go:117] "RemoveContainer" containerID="0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac"
Feb 16 02:07:22.437830 master-0 kubenswrapper[7721]: E0216 02:07:22.437797 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac\": container with ID starting with 0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac not found: ID does not exist" containerID="0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac"
Feb 16 02:07:22.437908 master-0 kubenswrapper[7721]: I0216 02:07:22.437835 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac"} err="failed to get container status \"0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac\": rpc error: code = NotFound desc = could not find container \"0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac\": container with ID starting with 0d84b73e94769f5bb383da7748f5fe5a5e16a29541c8ca48421a9e237e1dd5ac not found: ID does not exist"
Feb 16 02:07:22.437908 master-0 kubenswrapper[7721]: I0216 02:07:22.437873 7721 scope.go:117] "RemoveContainer" containerID="c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86"
Feb 16 02:07:22.454420 master-0 kubenswrapper[7721]: I0216 02:07:22.454323 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5574f479df-xqnpg"]
Feb 16 02:07:22.454940 master-0 kubenswrapper[7721]: I0216 02:07:22.454865 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"]
Feb 16 02:07:22.455318 master-0 kubenswrapper[7721]: I0216 02:07:22.455259 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:22.455729 master-0 kubenswrapper[7721]: I0216 02:07:22.455635 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg"
Feb 16 02:07:22.459015 master-0 kubenswrapper[7721]: I0216 02:07:22.458886 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 02:07:22.459116 master-0 kubenswrapper[7721]: I0216 02:07:22.459097 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 02:07:22.459311 master-0 kubenswrapper[7721]: I0216 02:07:22.459270 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 02:07:22.459558 master-0 kubenswrapper[7721]: I0216 02:07:22.459540 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 02:07:22.459668 master-0 kubenswrapper[7721]: I0216 02:07:22.459637 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 02:07:22.459705 master-0 kubenswrapper[7721]: I0216 02:07:22.459679 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 02:07:22.459705 master-0 kubenswrapper[7721]: I0216 02:07:22.459682 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 02:07:22.459808 master-0 kubenswrapper[7721]: I0216 02:07:22.459658 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 02:07:22.459887 master-0 kubenswrapper[7721]: I0216 02:07:22.459840 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 02:07:22.459969 master-0 kubenswrapper[7721]: I0216 02:07:22.459948 7721 scope.go:117] "RemoveContainer" containerID="c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86"
Feb 16 02:07:22.462186 master-0 kubenswrapper[7721]: E0216 02:07:22.462092 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86\": container with ID starting with c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86 not found: ID does not exist" containerID="c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86"
Feb 16 02:07:22.462186 master-0 kubenswrapper[7721]: I0216 02:07:22.462129 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86"} err="failed to get container status \"c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86\": rpc error: code = NotFound desc = could not find container \"c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86\": container with ID starting with c3139c1b519564ebf90db17e138f8f50b4797f5c614e087f4bcb2f54580a8b86 not found: ID does not exist"
Feb 16 02:07:22.462321 master-0 kubenswrapper[7721]: I0216 02:07:22.462198 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 02:07:22.475322 master-0 kubenswrapper[7721]: I0216 02:07:22.475270 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" podStartSLOduration=1.475250675 podStartE2EDuration="1.475250675s" podCreationTimestamp="2026-02-16 02:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:22.466403115 +0000 UTC m=+45.960637377" watchObservedRunningTime="2026-02-16 02:07:22.475250675 +0000 UTC m=+45.969484927"
Feb 16 02:07:22.476545 master-0 kubenswrapper[7721]: I0216 02:07:22.476526 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"]
Feb 16 02:07:22.476638 master-0 kubenswrapper[7721]: I0216 02:07:22.476628 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5574f479df-xqnpg"]
Feb 16 02:07:22.481286 master-0 kubenswrapper[7721]: I0216 02:07:22.481237 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 02:07:22.481655 master-0 kubenswrapper[7721]: I0216 02:07:22.481271 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kb98"
Feb 16 02:07:22.487078 master-0 kubenswrapper[7721]: I0216 02:07:22.487047 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-config\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:22.488139 master-0 kubenswrapper[7721]: I0216 02:07:22.487270 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm95w\" (UniqueName: \"kubernetes.io/projected/f2a7e185-78f4-4d69-b126-d465374a6218-kube-api-access-lm95w\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:22.488139 master-0 kubenswrapper[7721]: I0216 02:07:22.487409 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a7e185-78f4-4d69-b126-d465374a6218-serving-cert\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:22.488339 master-0 kubenswrapper[7721]: I0216 02:07:22.488313 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-client-ca\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:22.489252 master-0 kubenswrapper[7721]: I0216 02:07:22.489194 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=9.4891781 podStartE2EDuration="9.4891781s" podCreationTimestamp="2026-02-16 02:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:22.486367431 +0000 UTC m=+45.980601693" watchObservedRunningTime="2026-02-16 02:07:22.4891781 +0000 UTC m=+45.983412362"
Feb 16 02:07:22.590267 master-0 kubenswrapper[7721]: I0216 02:07:22.590202 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eda8a42-765e-47cf-896f-324e8185062e-serving-cert\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg"
Feb 16 02:07:22.590267 master-0 kubenswrapper[7721]: I0216 02:07:22.590259 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-client-ca\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:22.590494 master-0 kubenswrapper[7721]: I0216 02:07:22.590292 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-proxy-ca-bundles\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg"
Feb 16 02:07:22.590494 master-0 kubenswrapper[7721]: I0216 02:07:22.590337 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-config\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg"
Feb 16 02:07:22.590494 master-0 kubenswrapper[7721]: I0216 02:07:22.590363 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-config\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:22.590494 master-0 kubenswrapper[7721]: I0216 02:07:22.590386 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm95w\" (UniqueName: \"kubernetes.io/projected/f2a7e185-78f4-4d69-b126-d465374a6218-kube-api-access-lm95w\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") "
pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" Feb 16 02:07:22.590494 master-0 kubenswrapper[7721]: I0216 02:07:22.590408 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-client-ca\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.590494 master-0 kubenswrapper[7721]: I0216 02:07:22.590446 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmgwf\" (UniqueName: \"kubernetes.io/projected/7eda8a42-765e-47cf-896f-324e8185062e-kube-api-access-cmgwf\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.590494 master-0 kubenswrapper[7721]: I0216 02:07:22.590470 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a7e185-78f4-4d69-b126-d465374a6218-serving-cert\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" Feb 16 02:07:22.592941 master-0 kubenswrapper[7721]: I0216 02:07:22.591838 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-client-ca\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" Feb 16 02:07:22.593738 master-0 kubenswrapper[7721]: I0216 02:07:22.593694 7721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-config\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" Feb 16 02:07:22.595153 master-0 kubenswrapper[7721]: I0216 02:07:22.595113 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a7e185-78f4-4d69-b126-d465374a6218-serving-cert\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" Feb 16 02:07:22.612255 master-0 kubenswrapper[7721]: I0216 02:07:22.612189 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm95w\" (UniqueName: \"kubernetes.io/projected/f2a7e185-78f4-4d69-b126-d465374a6218-kube-api-access-lm95w\") pod \"route-controller-manager-696cfb9f87-87b8w\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") " pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" Feb 16 02:07:22.660604 master-0 kubenswrapper[7721]: I0216 02:07:22.660546 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"] Feb 16 02:07:22.673692 master-0 kubenswrapper[7721]: I0216 02:07:22.673627 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b5857dcf-vw8hp"] Feb 16 02:07:22.674737 master-0 kubenswrapper[7721]: I0216 02:07:22.674677 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l"] Feb 16 02:07:22.680696 master-0 kubenswrapper[7721]: I0216 02:07:22.678349 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-8b946f9d6-8xg2l"] Feb 16 02:07:22.691913 master-0 kubenswrapper[7721]: I0216 02:07:22.691658 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eda8a42-765e-47cf-896f-324e8185062e-serving-cert\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.691913 master-0 kubenswrapper[7721]: I0216 02:07:22.691713 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-proxy-ca-bundles\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.691913 master-0 kubenswrapper[7721]: I0216 02:07:22.691756 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-config\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.691913 master-0 kubenswrapper[7721]: I0216 02:07:22.691782 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-client-ca\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.692169 master-0 kubenswrapper[7721]: I0216 02:07:22.692102 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmgwf\" (UniqueName: 
\"kubernetes.io/projected/7eda8a42-765e-47cf-896f-324e8185062e-kube-api-access-cmgwf\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.698372 master-0 kubenswrapper[7721]: I0216 02:07:22.697324 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-proxy-ca-bundles\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.698372 master-0 kubenswrapper[7721]: I0216 02:07:22.697522 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-config\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.698372 master-0 kubenswrapper[7721]: I0216 02:07:22.698333 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eda8a42-765e-47cf-896f-324e8185062e-serving-cert\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.698834 master-0 kubenswrapper[7721]: I0216 02:07:22.698806 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-client-ca\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.707116 master-0 kubenswrapper[7721]: I0216 
02:07:22.707078 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmgwf\" (UniqueName: \"kubernetes.io/projected/7eda8a42-765e-47cf-896f-324e8185062e-kube-api-access-cmgwf\") pod \"controller-manager-5574f479df-xqnpg\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") " pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.730685 master-0 kubenswrapper[7721]: I0216 02:07:22.730346 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67f79f9544-nhm26" Feb 16 02:07:22.735042 master-0 kubenswrapper[7721]: I0216 02:07:22.734984 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70143f1c-5cf7-45b6-9490-8aa5535443c0" path="/var/lib/kubelet/pods/70143f1c-5cf7-45b6-9490-8aa5535443c0/volumes" Feb 16 02:07:22.735896 master-0 kubenswrapper[7721]: I0216 02:07:22.735833 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864c0ef4-319c-457c-aa3b-adf0c3e5a0ff" path="/var/lib/kubelet/pods/864c0ef4-319c-457c-aa3b-adf0c3e5a0ff/volumes" Feb 16 02:07:22.738545 master-0 kubenswrapper[7721]: I0216 02:07:22.736721 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80c9628-20e2-4326-b1f5-810fd755d7ca" path="/var/lib/kubelet/pods/e80c9628-20e2-4326-b1f5-810fd755d7ca/volumes" Feb 16 02:07:22.794894 master-0 kubenswrapper[7721]: I0216 02:07:22.794835 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlhf7\" (UniqueName: \"kubernetes.io/projected/ab3f83a0-cda8-441c-9069-455a731b0a89-kube-api-access-xlhf7\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.794894 master-0 kubenswrapper[7721]: I0216 02:07:22.794888 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-audit-dir\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.795130 master-0 kubenswrapper[7721]: I0216 02:07:22.794934 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-config\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.795130 master-0 kubenswrapper[7721]: I0216 02:07:22.794981 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-node-pullsecrets\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.795130 master-0 kubenswrapper[7721]: I0216 02:07:22.794998 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-client\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.795130 master-0 kubenswrapper[7721]: I0216 02:07:22.795016 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-trusted-ca-bundle\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.795130 master-0 kubenswrapper[7721]: I0216 02:07:22.795035 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-image-import-ca\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 
02:07:22.795130 master-0 kubenswrapper[7721]: I0216 02:07:22.795050 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-audit\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.795130 master-0 kubenswrapper[7721]: I0216 02:07:22.795072 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-serving-cert\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.795315 master-0 kubenswrapper[7721]: I0216 02:07:22.795148 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-serving-ca\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.795315 master-0 kubenswrapper[7721]: I0216 02:07:22.795166 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-encryption-config\") pod \"ab3f83a0-cda8-441c-9069-455a731b0a89\" (UID: \"ab3f83a0-cda8-441c-9069-455a731b0a89\") " Feb 16 02:07:22.798676 master-0 kubenswrapper[7721]: I0216 02:07:22.798617 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:22.800375 master-0 kubenswrapper[7721]: I0216 02:07:22.800327 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:07:22.801271 master-0 kubenswrapper[7721]: I0216 02:07:22.801248 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:22.801378 master-0 kubenswrapper[7721]: I0216 02:07:22.801361 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:22.801457 master-0 kubenswrapper[7721]: I0216 02:07:22.801388 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-audit" (OuterVolumeSpecName: "audit") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:22.801838 master-0 kubenswrapper[7721]: I0216 02:07:22.801808 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:22.801883 master-0 kubenswrapper[7721]: I0216 02:07:22.801860 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:22.805304 master-0 kubenswrapper[7721]: I0216 02:07:22.805276 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-config" (OuterVolumeSpecName: "config") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:07:22.811068 master-0 kubenswrapper[7721]: I0216 02:07:22.811010 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:07:22.811369 master-0 kubenswrapper[7721]: I0216 02:07:22.811315 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3f83a0-cda8-441c-9069-455a731b0a89-kube-api-access-xlhf7" (OuterVolumeSpecName: "kube-api-access-xlhf7") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "kube-api-access-xlhf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:07:22.813666 master-0 kubenswrapper[7721]: I0216 02:07:22.813601 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ab3f83a0-cda8-441c-9069-455a731b0a89" (UID: "ab3f83a0-cda8-441c-9069-455a731b0a89"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:07:22.852784 master-0 kubenswrapper[7721]: I0216 02:07:22.852725 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" Feb 16 02:07:22.869099 master-0 kubenswrapper[7721]: I0216 02:07:22.869042 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" Feb 16 02:07:22.896821 master-0 kubenswrapper[7721]: I0216 02:07:22.896752 7721 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.896821 master-0 kubenswrapper[7721]: I0216 02:07:22.896809 7721 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.896821 master-0 kubenswrapper[7721]: I0216 02:07:22.896828 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlhf7\" (UniqueName: \"kubernetes.io/projected/ab3f83a0-cda8-441c-9069-455a731b0a89-kube-api-access-xlhf7\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.897600 master-0 kubenswrapper[7721]: I0216 02:07:22.896848 7721 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.897600 master-0 kubenswrapper[7721]: I0216 02:07:22.896866 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.897600 master-0 kubenswrapper[7721]: I0216 02:07:22.896879 7721 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab3f83a0-cda8-441c-9069-455a731b0a89-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.897600 master-0 kubenswrapper[7721]: I0216 02:07:22.896894 7721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.897600 master-0 kubenswrapper[7721]: I0216 02:07:22.896910 7721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.897600 master-0 kubenswrapper[7721]: I0216 02:07:22.896924 7721 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-image-import-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.897600 master-0 kubenswrapper[7721]: I0216 02:07:22.896939 7721 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab3f83a0-cda8-441c-9069-455a731b0a89-audit\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.897600 master-0 kubenswrapper[7721]: I0216 02:07:22.896954 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3f83a0-cda8-441c-9069-455a731b0a89-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:22.903745 master-0 kubenswrapper[7721]: I0216 02:07:22.903688 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vkp55"] Feb 16 02:07:22.904068 master-0 kubenswrapper[7721]: E0216 02:07:22.903985 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3f83a0-cda8-441c-9069-455a731b0a89" containerName="fix-audit-permissions" Feb 16 02:07:22.904068 master-0 kubenswrapper[7721]: I0216 02:07:22.904002 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3f83a0-cda8-441c-9069-455a731b0a89" containerName="fix-audit-permissions" Feb 16 02:07:22.904264 master-0 kubenswrapper[7721]: I0216 02:07:22.904079 7721 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ab3f83a0-cda8-441c-9069-455a731b0a89" containerName="fix-audit-permissions" Feb 16 02:07:22.908896 master-0 kubenswrapper[7721]: I0216 02:07:22.908876 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:07:22.921537 master-0 kubenswrapper[7721]: I0216 02:07:22.921485 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkp55"] Feb 16 02:07:22.925558 master-0 kubenswrapper[7721]: I0216 02:07:22.925314 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_96142dec-d2af-459f-a45e-8e29ca30d4fd/installer/0.log" Feb 16 02:07:22.925558 master-0 kubenswrapper[7721]: I0216 02:07:22.925500 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998073 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-var-lock\") pod \"96142dec-d2af-459f-a45e-8e29ca30d4fd\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998204 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-kubelet-dir\") pod \"96142dec-d2af-459f-a45e-8e29ca30d4fd\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998218 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-var-lock" (OuterVolumeSpecName: "var-lock") pod "96142dec-d2af-459f-a45e-8e29ca30d4fd" (UID: "96142dec-d2af-459f-a45e-8e29ca30d4fd"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998270 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96142dec-d2af-459f-a45e-8e29ca30d4fd-kube-api-access\") pod \"96142dec-d2af-459f-a45e-8e29ca30d4fd\" (UID: \"96142dec-d2af-459f-a45e-8e29ca30d4fd\") " Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998297 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "96142dec-d2af-459f-a45e-8e29ca30d4fd" (UID: "96142dec-d2af-459f-a45e-8e29ca30d4fd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998541 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-utilities\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998574 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbf5f\" (UniqueName: \"kubernetes.io/projected/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-kube-api-access-gbf5f\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998618 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-catalog-content\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55"
Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998672 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:23.002462 master-0 kubenswrapper[7721]: I0216 02:07:22.998686 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/96142dec-d2af-459f-a45e-8e29ca30d4fd-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:23.024647 master-0 kubenswrapper[7721]: I0216 02:07:23.020175 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kb98"]
Feb 16 02:07:23.024647 master-0 kubenswrapper[7721]: I0216 02:07:23.020350 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96142dec-d2af-459f-a45e-8e29ca30d4fd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "96142dec-d2af-459f-a45e-8e29ca30d4fd" (UID: "96142dec-d2af-459f-a45e-8e29ca30d4fd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:07:23.105276 master-0 kubenswrapper[7721]: I0216 02:07:23.105224 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-utilities\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55"
Feb 16 02:07:23.105365 master-0 kubenswrapper[7721]: I0216 02:07:23.105282 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbf5f\" (UniqueName: \"kubernetes.io/projected/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-kube-api-access-gbf5f\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55"
Feb 16 02:07:23.105365 master-0 kubenswrapper[7721]: I0216 02:07:23.105346 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-catalog-content\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55"
Feb 16 02:07:23.105452 master-0 kubenswrapper[7721]: I0216 02:07:23.105410 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96142dec-d2af-459f-a45e-8e29ca30d4fd-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:23.105955 master-0 kubenswrapper[7721]: I0216 02:07:23.105922 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-catalog-content\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55"
Feb 16 02:07:23.113079 master-0 kubenswrapper[7721]: I0216 02:07:23.112945 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-utilities\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55"
Feb 16 02:07:23.131684 master-0 kubenswrapper[7721]: I0216 02:07:23.130733 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbf5f\" (UniqueName: \"kubernetes.io/projected/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-kube-api-access-gbf5f\") pod \"certified-operators-vkp55\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " pod="openshift-marketplace/certified-operators-vkp55"
Feb 16 02:07:24.379725 master-0 kubenswrapper[7721]: I0216 02:07:23.344511 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkp55"
Feb 16 02:07:24.379725 master-0 kubenswrapper[7721]: I0216 02:07:24.375219 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 16 02:07:24.379725 master-0 kubenswrapper[7721]: E0216 02:07:24.375396 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96142dec-d2af-459f-a45e-8e29ca30d4fd" containerName="installer"
Feb 16 02:07:24.379725 master-0 kubenswrapper[7721]: I0216 02:07:24.375408 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="96142dec-d2af-459f-a45e-8e29ca30d4fd" containerName="installer"
Feb 16 02:07:24.379725 master-0 kubenswrapper[7721]: I0216 02:07:24.375524 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="96142dec-d2af-459f-a45e-8e29ca30d4fd" containerName="installer"
Feb 16 02:07:24.379725 master-0 kubenswrapper[7721]: I0216 02:07:24.375820 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.392000 master-0 kubenswrapper[7721]: I0216 02:07:24.391120 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"4733c2df-0f5a-4696-b8c6-2568ebc7debc","Type":"ContainerStarted","Data":"3537f96b40e8859a6a366ec6550aaba73f34d9f862f4f2e89eccfbc047d01b00"}
Feb 16 02:07:24.397499 master-0 kubenswrapper[7721]: I0216 02:07:24.396148 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_96142dec-d2af-459f-a45e-8e29ca30d4fd/installer/0.log"
Feb 16 02:07:24.397499 master-0 kubenswrapper[7721]: I0216 02:07:24.396200 7721 generic.go:334] "Generic (PLEG): container finished" podID="96142dec-d2af-459f-a45e-8e29ca30d4fd" containerID="9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126" exitCode=1
Feb 16 02:07:24.397499 master-0 kubenswrapper[7721]: I0216 02:07:24.396297 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"96142dec-d2af-459f-a45e-8e29ca30d4fd","Type":"ContainerDied","Data":"9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126"}
Feb 16 02:07:24.397499 master-0 kubenswrapper[7721]: I0216 02:07:24.396329 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"96142dec-d2af-459f-a45e-8e29ca30d4fd","Type":"ContainerDied","Data":"aecda77080879d1c94103f2409e2d691f37346ebab3adcb041c89ffa50e6d4c1"}
Feb 16 02:07:24.397499 master-0 kubenswrapper[7721]: I0216 02:07:24.396348 7721 scope.go:117] "RemoveContainer" containerID="9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126"
Feb 16 02:07:24.397499 master-0 kubenswrapper[7721]: I0216 02:07:24.397332 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 16 02:07:24.406397 master-0 kubenswrapper[7721]: I0216 02:07:24.402845 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 16 02:07:24.413649 master-0 kubenswrapper[7721]: I0216 02:07:24.413599 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kb98" event={"ID":"774b6ff9-0e37-48fd-96c6-571859fec492","Type":"ContainerStarted","Data":"f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7"}
Feb 16 02:07:24.413649 master-0 kubenswrapper[7721]: I0216 02:07:24.413653 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kb98" event={"ID":"774b6ff9-0e37-48fd-96c6-571859fec492","Type":"ContainerStarted","Data":"a1ddda24c47c0110d6cb5389d382a77f53efd1546a0833c2415421c0d7cbe70f"}
Feb 16 02:07:24.423737 master-0 kubenswrapper[7721]: I0216 02:07:24.423677 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67f79f9544-nhm26" event={"ID":"ab3f83a0-cda8-441c-9069-455a731b0a89","Type":"ContainerDied","Data":"b7926415c82753ccf4491a81d02ad31d5faae903226a791550a4520f43659b98"}
Feb 16 02:07:24.423941 master-0 kubenswrapper[7721]: I0216 02:07:24.423806 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67f79f9544-nhm26"
Feb 16 02:07:24.445015 master-0 kubenswrapper[7721]: I0216 02:07:24.444871 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"]
Feb 16 02:07:24.448037 master-0 kubenswrapper[7721]: I0216 02:07:24.445833 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5574f479df-xqnpg"]
Feb 16 02:07:24.459072 master-0 kubenswrapper[7721]: I0216 02:07:24.456373 7721 scope.go:117] "RemoveContainer" containerID="9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126"
Feb 16 02:07:24.460903 master-0 kubenswrapper[7721]: E0216 02:07:24.460611 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126\": container with ID starting with 9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126 not found: ID does not exist" containerID="9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126"
Feb 16 02:07:24.460903 master-0 kubenswrapper[7721]: I0216 02:07:24.460650 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126"} err="failed to get container status \"9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126\": rpc error: code = NotFound desc = could not find container \"9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126\": container with ID starting with 9c65c6fee47ec602fb79a3d44443521383b462ecc1d1bc3ef41453f2b988a126 not found: ID does not exist"
Feb 16 02:07:24.460903 master-0 kubenswrapper[7721]: I0216 02:07:24.460694 7721 scope.go:117] "RemoveContainer" containerID="18d499d8a526dd8edbbf8498d799fc9ee812e51b7a856a0eff5dce3411756de5"
Feb 16 02:07:24.476709 master-0 kubenswrapper[7721]: W0216 02:07:24.476657 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eda8a42_765e_47cf_896f_324e8185062e.slice/crio-71f88debd6e32c8926a0394683094104a5453ca71a39b14a81064d10cb0255f9 WatchSource:0}: Error finding container 71f88debd6e32c8926a0394683094104a5453ca71a39b14a81064d10cb0255f9: Status 404 returned error can't find the container with id 71f88debd6e32c8926a0394683094104a5453ca71a39b14a81064d10cb0255f9
Feb 16 02:07:24.485379 master-0 kubenswrapper[7721]: I0216 02:07:24.478808 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.485379 master-0 kubenswrapper[7721]: I0216 02:07:24.478901 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=3.478878226 podStartE2EDuration="3.478878226s" podCreationTimestamp="2026-02-16 02:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:24.478258841 +0000 UTC m=+47.972493103" watchObservedRunningTime="2026-02-16 02:07:24.478878226 +0000 UTC m=+47.973112498"
Feb 16 02:07:24.485379 master-0 kubenswrapper[7721]: I0216 02:07:24.478930 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-var-lock\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.485379 master-0 kubenswrapper[7721]: I0216 02:07:24.479820 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.516332 master-0 kubenswrapper[7721]: I0216 02:07:24.516296 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 16 02:07:24.520574 master-0 kubenswrapper[7721]: I0216 02:07:24.519775 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 16 02:07:24.564851 master-0 kubenswrapper[7721]: I0216 02:07:24.564595 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 16 02:07:24.565300 master-0 kubenswrapper[7721]: I0216 02:07:24.565262 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67f79f9544-nhm26"]
Feb 16 02:07:24.565300 master-0 kubenswrapper[7721]: I0216 02:07:24.565295 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-67f79f9544-nhm26"]
Feb 16 02:07:24.565423 master-0 kubenswrapper[7721]: I0216 02:07:24.565384 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.569092 master-0 kubenswrapper[7721]: I0216 02:07:24.569054 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 02:07:24.571061 master-0 kubenswrapper[7721]: I0216 02:07:24.571021 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 16 02:07:24.580502 master-0 kubenswrapper[7721]: I0216 02:07:24.580451 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-var-lock\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.580666 master-0 kubenswrapper[7721]: I0216 02:07:24.580611 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-var-lock\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.580705 master-0 kubenswrapper[7721]: I0216 02:07:24.580672 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.580788 master-0 kubenswrapper[7721]: I0216 02:07:24.580708 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.581578 master-0 kubenswrapper[7721]: I0216 02:07:24.580851 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.596568 master-0 kubenswrapper[7721]: I0216 02:07:24.596510 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.682265 master-0 kubenswrapper[7721]: I0216 02:07:24.682204 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.682265 master-0 kubenswrapper[7721]: I0216 02:07:24.682253 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80f43f07-ce08-4c21-9463-ea983a110244-kube-api-access\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.682265 master-0 kubenswrapper[7721]: I0216 02:07:24.682272 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-var-lock\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.698157 master-0 kubenswrapper[7721]: I0216 02:07:24.698092 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-trqk8"]
Feb 16 02:07:24.699339 master-0 kubenswrapper[7721]: I0216 02:07:24.699297 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.720677 master-0 kubenswrapper[7721]: I0216 02:07:24.720611 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trqk8"]
Feb 16 02:07:24.733291 master-0 kubenswrapper[7721]: I0216 02:07:24.733246 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96142dec-d2af-459f-a45e-8e29ca30d4fd" path="/var/lib/kubelet/pods/96142dec-d2af-459f-a45e-8e29ca30d4fd/volumes"
Feb 16 02:07:24.734135 master-0 kubenswrapper[7721]: I0216 02:07:24.734115 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3f83a0-cda8-441c-9069-455a731b0a89" path="/var/lib/kubelet/pods/ab3f83a0-cda8-441c-9069-455a731b0a89/volumes"
Feb 16 02:07:24.766957 master-0 kubenswrapper[7721]: I0216 02:07:24.766894 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 02:07:24.775076 master-0 kubenswrapper[7721]: I0216 02:07:24.774981 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkp55"]
Feb 16 02:07:24.782956 master-0 kubenswrapper[7721]: W0216 02:07:24.782896 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5d40eb_8051_46a8_9cd9_d2b1f152dbf0.slice/crio-09423cf84bc18b39153d050c3d29a41cb735dc66d3e929960f6e1f6ec404f766 WatchSource:0}: Error finding container 09423cf84bc18b39153d050c3d29a41cb735dc66d3e929960f6e1f6ec404f766: Status 404 returned error can't find the container with id 09423cf84bc18b39153d050c3d29a41cb735dc66d3e929960f6e1f6ec404f766
Feb 16 02:07:24.783149 master-0 kubenswrapper[7721]: I0216 02:07:24.783112 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jgf\" (UniqueName: \"kubernetes.io/projected/ab463f74-d1e7-44f1-9634-d9f63685b06d-kube-api-access-h7jgf\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.783234 master-0 kubenswrapper[7721]: I0216 02:07:24.783158 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-catalog-content\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.783234 master-0 kubenswrapper[7721]: I0216 02:07:24.783194 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.783323 master-0 kubenswrapper[7721]: I0216 02:07:24.783239 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80f43f07-ce08-4c21-9463-ea983a110244-kube-api-access\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.783323 master-0 kubenswrapper[7721]: I0216 02:07:24.783265 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-var-lock\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.783323 master-0 kubenswrapper[7721]: I0216 02:07:24.783313 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-utilities\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.784021 master-0 kubenswrapper[7721]: I0216 02:07:24.783638 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-var-lock\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.784021 master-0 kubenswrapper[7721]: I0216 02:07:24.783647 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.802904 master-0 kubenswrapper[7721]: I0216 02:07:24.802857 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80f43f07-ce08-4c21-9463-ea983a110244-kube-api-access\") pod \"installer-1-master-0\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.887385 master-0 kubenswrapper[7721]: I0216 02:07:24.887336 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jgf\" (UniqueName: \"kubernetes.io/projected/ab463f74-d1e7-44f1-9634-d9f63685b06d-kube-api-access-h7jgf\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.887385 master-0 kubenswrapper[7721]: I0216 02:07:24.887391 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-catalog-content\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.887608 master-0 kubenswrapper[7721]: I0216 02:07:24.887450 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-utilities\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.887899 master-0 kubenswrapper[7721]: I0216 02:07:24.887879 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-utilities\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.888429 master-0 kubenswrapper[7721]: I0216 02:07:24.888402 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-catalog-content\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:24.912196 master-0 kubenswrapper[7721]: I0216 02:07:24.911881 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 02:07:24.924704 master-0 kubenswrapper[7721]: I0216 02:07:24.924623 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7jgf\" (UniqueName: \"kubernetes.io/projected/ab463f74-d1e7-44f1-9634-d9f63685b06d-kube-api-access-h7jgf\") pod \"community-operators-trqk8\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:25.018456 master-0 kubenswrapper[7721]: I0216 02:07:25.015972 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trqk8"
Feb 16 02:07:25.195006 master-0 kubenswrapper[7721]: W0216 02:07:25.192836 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4c252310_b471_4560_bff7_1fc3e5a0ca6e.slice/crio-8f4e7b15846d4dbe7cb1a82fc83a14ab9dd2113d8ac8871e55e04e8ec4eae91c WatchSource:0}: Error finding container 8f4e7b15846d4dbe7cb1a82fc83a14ab9dd2113d8ac8871e55e04e8ec4eae91c: Status 404 returned error can't find the container with id 8f4e7b15846d4dbe7cb1a82fc83a14ab9dd2113d8ac8871e55e04e8ec4eae91c
Feb 16 02:07:25.195502 master-0 kubenswrapper[7721]: I0216 02:07:25.195468 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 16 02:07:25.302320 master-0 kubenswrapper[7721]: I0216 02:07:25.300486 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9qtbw"]
Feb 16 02:07:25.302320 master-0 kubenswrapper[7721]: I0216 02:07:25.301261 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.325190 master-0 kubenswrapper[7721]: I0216 02:07:25.324519 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qtbw"]
Feb 16 02:07:25.364527 master-0 kubenswrapper[7721]: I0216 02:07:25.359886 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 16 02:07:25.395405 master-0 kubenswrapper[7721]: I0216 02:07:25.395359 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-utilities\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.395910 master-0 kubenswrapper[7721]: I0216 02:07:25.395417 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvw2\" (UniqueName: \"kubernetes.io/projected/0bdb65c2-c4bc-4e33-9e5a-61542c659700-kube-api-access-8xvw2\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.395910 master-0 kubenswrapper[7721]: I0216 02:07:25.395460 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-catalog-content\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.433277 master-0 kubenswrapper[7721]: I0216 02:07:25.433234 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" event={"ID":"f2a7e185-78f4-4d69-b126-d465374a6218","Type":"ContainerStarted","Data":"9208e311bfc82baf59af5b784307b97c850fa510c8d36aefe36f52bc21d5d523"}
Feb 16 02:07:25.433422 master-0 kubenswrapper[7721]: I0216 02:07:25.433409 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:25.433509 master-0 kubenswrapper[7721]: I0216 02:07:25.433495 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" event={"ID":"f2a7e185-78f4-4d69-b126-d465374a6218","Type":"ContainerStarted","Data":"4fc83c72923d06bb27dc72e1294113ac5b34e6ed149d10d4a4b519d50c5b89a9"}
Feb 16 02:07:25.441962 master-0 kubenswrapper[7721]: I0216 02:07:25.438127 7721 generic.go:334] "Generic (PLEG): container finished" podID="774b6ff9-0e37-48fd-96c6-571859fec492" containerID="f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7" exitCode=0
Feb 16 02:07:25.441962 master-0 kubenswrapper[7721]: I0216 02:07:25.438280 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kb98" event={"ID":"774b6ff9-0e37-48fd-96c6-571859fec492","Type":"ContainerDied","Data":"f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7"}
Feb 16 02:07:25.441962 master-0 kubenswrapper[7721]: I0216 02:07:25.439725 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:25.441962 master-0 kubenswrapper[7721]: I0216 02:07:25.441745 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"4c252310-b471-4560-bff7-1fc3e5a0ca6e","Type":"ContainerStarted","Data":"8f4e7b15846d4dbe7cb1a82fc83a14ab9dd2113d8ac8871e55e04e8ec4eae91c"}
Feb 16 02:07:25.445035 master-0 kubenswrapper[7721]: I0216 02:07:25.443655 7721 generic.go:334] "Generic (PLEG): container finished" podID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerID="dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a" exitCode=0
Feb 16 02:07:25.445035 master-0 kubenswrapper[7721]: I0216 02:07:25.443720 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkp55" event={"ID":"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0","Type":"ContainerDied","Data":"dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a"}
Feb 16 02:07:25.445035 master-0 kubenswrapper[7721]: I0216 02:07:25.443743 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkp55" event={"ID":"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0","Type":"ContainerStarted","Data":"09423cf84bc18b39153d050c3d29a41cb735dc66d3e929960f6e1f6ec404f766"}
Feb 16 02:07:25.447652 master-0 kubenswrapper[7721]: I0216 02:07:25.447571 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" event={"ID":"7eda8a42-765e-47cf-896f-324e8185062e","Type":"ContainerStarted","Data":"1ac7f9a59837f186887fd1e06eef65d6e68ebbfb41544b75e3a57d65b27ead8f"}
Feb 16 02:07:25.447652 master-0 kubenswrapper[7721]: I0216 02:07:25.447618 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" event={"ID":"7eda8a42-765e-47cf-896f-324e8185062e","Type":"ContainerStarted","Data":"71f88debd6e32c8926a0394683094104a5453ca71a39b14a81064d10cb0255f9"}
Feb 16 02:07:25.447863 master-0 kubenswrapper[7721]: I0216 02:07:25.447817 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg"
Feb 16 02:07:25.450487 master-0 kubenswrapper[7721]: I0216 02:07:25.450287 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"80f43f07-ce08-4c21-9463-ea983a110244","Type":"ContainerStarted","Data":"aae309ad89c83d46c9fddf6708eda09d37f1fa06aa9277a0a246c53f3525897c"}
Feb 16 02:07:25.470544 master-0 kubenswrapper[7721]: I0216 02:07:25.470479 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg"
Feb 16 02:07:25.470900 master-0 kubenswrapper[7721]: I0216 02:07:25.470827 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" podStartSLOduration=4.470801826 podStartE2EDuration="4.470801826s" podCreationTimestamp="2026-02-16 02:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:25.470746235 +0000 UTC m=+48.964980527" watchObservedRunningTime="2026-02-16 02:07:25.470801826 +0000 UTC m=+48.965036088"
Feb 16 02:07:25.519928 master-0 kubenswrapper[7721]: I0216 02:07:25.498001 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-utilities\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.519928 master-0 kubenswrapper[7721]: I0216 02:07:25.498205 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvw2\" (UniqueName: \"kubernetes.io/projected/0bdb65c2-c4bc-4e33-9e5a-61542c659700-kube-api-access-8xvw2\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.519928 master-0 kubenswrapper[7721]: I0216 02:07:25.498348 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-catalog-content\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.519928 master-0 kubenswrapper[7721]: I0216 02:07:25.501422 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-catalog-content\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.519928 master-0 kubenswrapper[7721]: I0216 02:07:25.501679 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-utilities\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.519928 master-0 kubenswrapper[7721]: I0216 02:07:25.502532 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" podStartSLOduration=4.502518943 podStartE2EDuration="4.502518943s" podCreationTimestamp="2026-02-16 02:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:25.501886148 +0000 UTC m=+48.996120420" watchObservedRunningTime="2026-02-16 02:07:25.502518943 +0000 UTC m=+48.996753215"
Feb 16 02:07:25.519928 master-0 kubenswrapper[7721]: I0216 02:07:25.514344 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trqk8"]
Feb 16 02:07:25.553665 master-0 kubenswrapper[7721]: I0216 02:07:25.546608 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvw2\" (UniqueName: \"kubernetes.io/projected/0bdb65c2-c4bc-4e33-9e5a-61542c659700-kube-api-access-8xvw2\") pod \"redhat-marketplace-9qtbw\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:25.640460 master-0 kubenswrapper[7721]: I0216 02:07:25.631909 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qtbw"
Feb 16 02:07:26.078759 master-0 kubenswrapper[7721]: I0216 02:07:26.078604 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qtbw"]
Feb 16 02:07:26.088850 master-0 kubenswrapper[7721]: W0216 02:07:26.088777 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bdb65c2_c4bc_4e33_9e5a_61542c659700.slice/crio-239825a3989258bbf59de5ff95b00559d0502acecd844ad3bed64d4e2e8c2676 WatchSource:0}: Error finding container 239825a3989258bbf59de5ff95b00559d0502acecd844ad3bed64d4e2e8c2676: Status 404 returned error can't find the container with id 239825a3989258bbf59de5ff95b00559d0502acecd844ad3bed64d4e2e8c2676
Feb 16 02:07:26.458331 master-0 kubenswrapper[7721]: I0216 02:07:26.458263 7721 generic.go:334] "Generic (PLEG): container finished" podID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerID="c3d16f1de74c265a7099eb264692d380180b156ac079a97448ae93f5a03844ad" exitCode=0
Feb 16 02:07:26.458861 master-0 kubenswrapper[7721]: I0216 02:07:26.458347 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trqk8" event={"ID":"ab463f74-d1e7-44f1-9634-d9f63685b06d","Type":"ContainerDied","Data":"c3d16f1de74c265a7099eb264692d380180b156ac079a97448ae93f5a03844ad"}
Feb 16 02:07:26.458861 master-0 kubenswrapper[7721]: I0216 02:07:26.458378 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trqk8" event={"ID":"ab463f74-d1e7-44f1-9634-d9f63685b06d","Type":"ContainerStarted","Data":"63ff9ecc1fd3652504bc8f536c52d520abfef70fdd743636d7aff4953ee9f4f4"}
Feb 16 02:07:26.468480 master-0 kubenswrapper[7721]: I0216 02:07:26.464137 7721 generic.go:334] "Generic (PLEG): container finished" podID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerID="94f8bb1308558c15bc7708b6c955735ebf5750573898ff17e0375408e94d34fd" exitCode=0
Feb 16 02:07:26.468480 master-0 kubenswrapper[7721]: I0216 02:07:26.464219 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qtbw" event={"ID":"0bdb65c2-c4bc-4e33-9e5a-61542c659700","Type":"ContainerDied","Data":"94f8bb1308558c15bc7708b6c955735ebf5750573898ff17e0375408e94d34fd"}
Feb 16 02:07:26.468480 master-0 kubenswrapper[7721]: I0216 02:07:26.464251 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qtbw" event={"ID":"0bdb65c2-c4bc-4e33-9e5a-61542c659700","Type":"ContainerStarted","Data":"239825a3989258bbf59de5ff95b00559d0502acecd844ad3bed64d4e2e8c2676"}
Feb 16 02:07:26.471642 master-0 kubenswrapper[7721]: I0216 02:07:26.471571 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"4c252310-b471-4560-bff7-1fc3e5a0ca6e","Type":"ContainerStarted","Data":"50b35135525cac92334a91f4ec010bc5934bf80c0d3dfe0e713119678ac6f2a8"}
Feb 16 02:07:26.476523 master-0 kubenswrapper[7721]: I0216 02:07:26.474648 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"80f43f07-ce08-4c21-9463-ea983a110244","Type":"ContainerStarted","Data":"b4ee28c7394858e0cf4928c25f023b84c89ee6af3676ca6c853d7b858571c63f"}
Feb 16 02:07:26.661863 master-0 kubenswrapper[7721]: I0216 02:07:26.508740 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=3.508714987
podStartE2EDuration="3.508714987s" podCreationTimestamp="2026-02-16 02:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:26.49516582 +0000 UTC m=+49.989400082" watchObservedRunningTime="2026-02-16 02:07:26.508714987 +0000 UTC m=+50.002949249" Feb 16 02:07:28.035307 master-0 kubenswrapper[7721]: I0216 02:07:28.035021 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=4.03498744 podStartE2EDuration="4.03498744s" podCreationTimestamp="2026-02-16 02:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:28.02611542 +0000 UTC m=+51.520349682" watchObservedRunningTime="2026-02-16 02:07:28.03498744 +0000 UTC m=+51.529221692" Feb 16 02:07:28.058495 master-0 kubenswrapper[7721]: I0216 02:07:28.047345 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-578b9bc556-8g98v"] Feb 16 02:07:28.058495 master-0 kubenswrapper[7721]: I0216 02:07:28.048259 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.058495 master-0 kubenswrapper[7721]: I0216 02:07:28.057912 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 02:07:28.058495 master-0 kubenswrapper[7721]: I0216 02:07:28.058101 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 02:07:28.058495 master-0 kubenswrapper[7721]: I0216 02:07:28.058239 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 02:07:28.058495 master-0 kubenswrapper[7721]: I0216 02:07:28.058334 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 02:07:28.058495 master-0 kubenswrapper[7721]: I0216 02:07:28.058428 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 02:07:28.058853 master-0 kubenswrapper[7721]: I0216 02:07:28.058565 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 02:07:28.059782 master-0 kubenswrapper[7721]: I0216 02:07:28.059736 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 02:07:28.059957 master-0 kubenswrapper[7721]: I0216 02:07:28.059917 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 02:07:28.059994 master-0 kubenswrapper[7721]: I0216 02:07:28.059977 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 02:07:28.062168 master-0 kubenswrapper[7721]: I0216 02:07:28.062139 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 02:07:28.205833 master-0 kubenswrapper[7721]: I0216 02:07:28.205719 7721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-serving-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.205998 master-0 kubenswrapper[7721]: I0216 02:07:28.205847 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.205998 master-0 kubenswrapper[7721]: I0216 02:07:28.205900 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-client\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.205998 master-0 kubenswrapper[7721]: I0216 02:07:28.205924 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit-dir\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.205998 master-0 kubenswrapper[7721]: I0216 02:07:28.205956 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" 
Feb 16 02:07:28.206168 master-0 kubenswrapper[7721]: I0216 02:07:28.206125 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-image-import-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.206203 master-0 kubenswrapper[7721]: I0216 02:07:28.206183 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-trusted-ca-bundle\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.206252 master-0 kubenswrapper[7721]: I0216 02:07:28.206233 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-encryption-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.206293 master-0 kubenswrapper[7721]: I0216 02:07:28.206274 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-serving-cert\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.206362 master-0 kubenswrapper[7721]: I0216 02:07:28.206332 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnht\" (UniqueName: 
\"kubernetes.io/projected/8f918d5b-1a4c-4b56-98a4-5cef638bb615-kube-api-access-fxnht\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.206398 master-0 kubenswrapper[7721]: I0216 02:07:28.206375 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-node-pullsecrets\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.307601 master-0 kubenswrapper[7721]: I0216 02:07:28.307392 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-node-pullsecrets\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.307601 master-0 kubenswrapper[7721]: I0216 02:07:28.307482 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-serving-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.307601 master-0 kubenswrapper[7721]: I0216 02:07:28.307507 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.307601 master-0 kubenswrapper[7721]: I0216 02:07:28.307548 7721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-client\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.307601 master-0 kubenswrapper[7721]: I0216 02:07:28.307571 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit-dir\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.307601 master-0 kubenswrapper[7721]: I0216 02:07:28.307591 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.308167 master-0 kubenswrapper[7721]: I0216 02:07:28.307624 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-image-import-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.308167 master-0 kubenswrapper[7721]: I0216 02:07:28.307643 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-trusted-ca-bundle\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.308167 master-0 kubenswrapper[7721]: I0216 02:07:28.307673 7721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-encryption-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.308167 master-0 kubenswrapper[7721]: I0216 02:07:28.307660 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-node-pullsecrets\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.308167 master-0 kubenswrapper[7721]: I0216 02:07:28.307691 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-serving-cert\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.308167 master-0 kubenswrapper[7721]: I0216 02:07:28.307864 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnht\" (UniqueName: \"kubernetes.io/projected/8f918d5b-1a4c-4b56-98a4-5cef638bb615-kube-api-access-fxnht\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.308608 master-0 kubenswrapper[7721]: I0216 02:07:28.308369 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit-dir\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.309632 master-0 kubenswrapper[7721]: 
I0216 02:07:28.309589 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-serving-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.309855 master-0 kubenswrapper[7721]: I0216 02:07:28.309784 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-image-import-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.312481 master-0 kubenswrapper[7721]: I0216 02:07:28.312353 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.313237 master-0 kubenswrapper[7721]: I0216 02:07:28.313161 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-trusted-ca-bundle\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.313334 master-0 kubenswrapper[7721]: I0216 02:07:28.313258 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-serving-cert\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.313410 master-0 kubenswrapper[7721]: I0216 
02:07:28.313316 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.314379 master-0 kubenswrapper[7721]: I0216 02:07:28.314268 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-encryption-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.314560 master-0 kubenswrapper[7721]: I0216 02:07:28.314432 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-client\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.659667 master-0 kubenswrapper[7721]: I0216 02:07:28.645694 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-578b9bc556-8g98v"] Feb 16 02:07:28.684208 master-0 kubenswrapper[7721]: I0216 02:07:28.684161 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxnht\" (UniqueName: \"kubernetes.io/projected/8f918d5b-1a4c-4b56-98a4-5cef638bb615-kube-api-access-fxnht\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:28.972751 master-0 kubenswrapper[7721]: I0216 02:07:28.972293 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:29.454178 master-0 kubenswrapper[7721]: I0216 02:07:29.454116 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-578b9bc556-8g98v"] Feb 16 02:07:29.817165 master-0 kubenswrapper[7721]: I0216 02:07:29.817035 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 02:07:29.817749 master-0 kubenswrapper[7721]: I0216 02:07:29.817614 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="4c252310-b471-4560-bff7-1fc3e5a0ca6e" containerName="installer" containerID="cri-o://50b35135525cac92334a91f4ec010bc5934bf80c0d3dfe0e713119678ac6f2a8" gracePeriod=30 Feb 16 02:07:30.841143 master-0 kubenswrapper[7721]: I0216 02:07:30.841099 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_4c252310-b471-4560-bff7-1fc3e5a0ca6e/installer/0.log" Feb 16 02:07:30.841143 master-0 kubenswrapper[7721]: I0216 02:07:30.841151 7721 generic.go:334] "Generic (PLEG): container finished" podID="4c252310-b471-4560-bff7-1fc3e5a0ca6e" containerID="50b35135525cac92334a91f4ec010bc5934bf80c0d3dfe0e713119678ac6f2a8" exitCode=1 Feb 16 02:07:30.841845 master-0 kubenswrapper[7721]: I0216 02:07:30.841183 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"4c252310-b471-4560-bff7-1fc3e5a0ca6e","Type":"ContainerDied","Data":"50b35135525cac92334a91f4ec010bc5934bf80c0d3dfe0e713119678ac6f2a8"} Feb 16 02:07:31.800235 master-0 kubenswrapper[7721]: I0216 02:07:31.800168 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 02:07:31.801075 master-0 kubenswrapper[7721]: I0216 02:07:31.801039 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:31.811783 master-0 kubenswrapper[7721]: I0216 02:07:31.811736 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 02:07:31.964894 master-0 kubenswrapper[7721]: I0216 02:07:31.964809 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-var-lock\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:31.964894 master-0 kubenswrapper[7721]: I0216 02:07:31.964874 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:31.965790 master-0 kubenswrapper[7721]: I0216 02:07:31.964981 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:32.066863 master-0 kubenswrapper[7721]: I0216 02:07:32.066706 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:32.066863 master-0 kubenswrapper[7721]: I0216 02:07:32.066774 7721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:32.067160 master-0 kubenswrapper[7721]: I0216 02:07:32.066886 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:32.067160 master-0 kubenswrapper[7721]: I0216 02:07:32.067010 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-var-lock\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:32.067234 master-0 kubenswrapper[7721]: I0216 02:07:32.067155 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-var-lock\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:32.082426 master-0 kubenswrapper[7721]: I0216 02:07:32.082378 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:32.098299 master-0 kubenswrapper[7721]: W0216 02:07:32.098242 7721 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f918d5b_1a4c_4b56_98a4_5cef638bb615.slice/crio-a926069b18af0a45219030c9719e08a473a50355bc2d0c1fd700cdf2592cfa4c WatchSource:0}: Error finding container a926069b18af0a45219030c9719e08a473a50355bc2d0c1fd700cdf2592cfa4c: Status 404 returned error can't find the container with id a926069b18af0a45219030c9719e08a473a50355bc2d0c1fd700cdf2592cfa4c Feb 16 02:07:32.130838 master-0 kubenswrapper[7721]: I0216 02:07:32.130145 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:07:32.220497 master-0 kubenswrapper[7721]: I0216 02:07:32.209743 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 02:07:32.220497 master-0 kubenswrapper[7721]: I0216 02:07:32.210645 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 02:07:32.220497 master-0 kubenswrapper[7721]: I0216 02:07:32.210809 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.220497 master-0 kubenswrapper[7721]: I0216 02:07:32.214408 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 02:07:32.273494 master-0 kubenswrapper[7721]: I0216 02:07:32.273406 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0615fd34-eaf9-4a3a-8543-25a7a5747194-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.273795 master-0 kubenswrapper[7721]: I0216 02:07:32.273756 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.273795 master-0 kubenswrapper[7721]: I0216 02:07:32.273791 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-var-lock\") pod \"installer-1-master-0\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.363685 master-0 kubenswrapper[7721]: I0216 02:07:32.363658 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_4c252310-b471-4560-bff7-1fc3e5a0ca6e/installer/0.log" Feb 16 02:07:32.363780 master-0 kubenswrapper[7721]: I0216 02:07:32.363719 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 02:07:32.374511 master-0 kubenswrapper[7721]: I0216 02:07:32.374446 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.374511 master-0 kubenswrapper[7721]: I0216 02:07:32.374482 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-var-lock\") pod \"installer-1-master-0\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.374511 master-0 kubenswrapper[7721]: I0216 02:07:32.374508 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0615fd34-eaf9-4a3a-8543-25a7a5747194-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.374821 master-0 kubenswrapper[7721]: I0216 02:07:32.374793 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.375018 master-0 kubenswrapper[7721]: I0216 02:07:32.374960 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-var-lock\") pod \"installer-1-master-0\" (UID: 
\"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.394739 master-0 kubenswrapper[7721]: I0216 02:07:32.394684 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0615fd34-eaf9-4a3a-8543-25a7a5747194-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.475977 master-0 kubenswrapper[7721]: I0216 02:07:32.475914 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kubelet-dir\") pod \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " Feb 16 02:07:32.476137 master-0 kubenswrapper[7721]: I0216 02:07:32.476110 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kube-api-access\") pod \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " Feb 16 02:07:32.476180 master-0 kubenswrapper[7721]: I0216 02:07:32.476154 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-var-lock\") pod \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\" (UID: \"4c252310-b471-4560-bff7-1fc3e5a0ca6e\") " Feb 16 02:07:32.476533 master-0 kubenswrapper[7721]: I0216 02:07:32.476507 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-var-lock" (OuterVolumeSpecName: "var-lock") pod "4c252310-b471-4560-bff7-1fc3e5a0ca6e" (UID: "4c252310-b471-4560-bff7-1fc3e5a0ca6e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:32.476632 master-0 kubenswrapper[7721]: I0216 02:07:32.476613 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4c252310-b471-4560-bff7-1fc3e5a0ca6e" (UID: "4c252310-b471-4560-bff7-1fc3e5a0ca6e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:07:32.482299 master-0 kubenswrapper[7721]: I0216 02:07:32.482240 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4c252310-b471-4560-bff7-1fc3e5a0ca6e" (UID: "4c252310-b471-4560-bff7-1fc3e5a0ca6e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:07:32.530725 master-0 kubenswrapper[7721]: I0216 02:07:32.530673 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:07:32.554378 master-0 kubenswrapper[7721]: I0216 02:07:32.554332 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 02:07:32.561120 master-0 kubenswrapper[7721]: W0216 02:07:32.561072 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8ea4c28c_8f53_4b41_9c85_c8c50599d7cd.slice/crio-c1cda764f5f0471c7463d4e4932eaea865d91aa81030462076bd5270b356dfca WatchSource:0}: Error finding container c1cda764f5f0471c7463d4e4932eaea865d91aa81030462076bd5270b356dfca: Status 404 returned error can't find the container with id c1cda764f5f0471c7463d4e4932eaea865d91aa81030462076bd5270b356dfca Feb 16 02:07:32.577743 master-0 kubenswrapper[7721]: I0216 02:07:32.577705 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:32.577807 master-0 kubenswrapper[7721]: I0216 02:07:32.577746 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c252310-b471-4560-bff7-1fc3e5a0ca6e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:32.577807 master-0 kubenswrapper[7721]: I0216 02:07:32.577760 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c252310-b471-4560-bff7-1fc3e5a0ca6e-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:07:32.855001 master-0 kubenswrapper[7721]: I0216 02:07:32.854948 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd","Type":"ContainerStarted","Data":"c1cda764f5f0471c7463d4e4932eaea865d91aa81030462076bd5270b356dfca"} Feb 16 02:07:32.860582 master-0 
kubenswrapper[7721]: I0216 02:07:32.860543 7721 generic.go:334] "Generic (PLEG): container finished" podID="8f918d5b-1a4c-4b56-98a4-5cef638bb615" containerID="2df1122300d4e774c3090e5a2115fbbbf79fe2cef81c2ccba8b6a290040b96a4" exitCode=0 Feb 16 02:07:32.860672 master-0 kubenswrapper[7721]: I0216 02:07:32.860646 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" event={"ID":"8f918d5b-1a4c-4b56-98a4-5cef638bb615","Type":"ContainerDied","Data":"2df1122300d4e774c3090e5a2115fbbbf79fe2cef81c2ccba8b6a290040b96a4"} Feb 16 02:07:32.860900 master-0 kubenswrapper[7721]: I0216 02:07:32.860867 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" event={"ID":"8f918d5b-1a4c-4b56-98a4-5cef638bb615","Type":"ContainerStarted","Data":"a926069b18af0a45219030c9719e08a473a50355bc2d0c1fd700cdf2592cfa4c"} Feb 16 02:07:32.863461 master-0 kubenswrapper[7721]: I0216 02:07:32.863415 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_4c252310-b471-4560-bff7-1fc3e5a0ca6e/installer/0.log" Feb 16 02:07:32.863643 master-0 kubenswrapper[7721]: I0216 02:07:32.863615 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"4c252310-b471-4560-bff7-1fc3e5a0ca6e","Type":"ContainerDied","Data":"8f4e7b15846d4dbe7cb1a82fc83a14ab9dd2113d8ac8871e55e04e8ec4eae91c"} Feb 16 02:07:32.863701 master-0 kubenswrapper[7721]: I0216 02:07:32.863665 7721 scope.go:117] "RemoveContainer" containerID="50b35135525cac92334a91f4ec010bc5934bf80c0d3dfe0e713119678ac6f2a8" Feb 16 02:07:32.863701 master-0 kubenswrapper[7721]: I0216 02:07:32.863680 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 02:07:32.947041 master-0 kubenswrapper[7721]: I0216 02:07:32.946966 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 02:07:32.950667 master-0 kubenswrapper[7721]: I0216 02:07:32.950396 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 02:07:32.963573 master-0 kubenswrapper[7721]: I0216 02:07:32.963511 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 02:07:32.973383 master-0 kubenswrapper[7721]: W0216 02:07:32.973265 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0615fd34_eaf9_4a3a_8543_25a7a5747194.slice/crio-245c08cde6ae6142a71beebf44880b7111b195a8d0e90b35f37d8ae7cd3316d9 WatchSource:0}: Error finding container 245c08cde6ae6142a71beebf44880b7111b195a8d0e90b35f37d8ae7cd3316d9: Status 404 returned error can't find the container with id 245c08cde6ae6142a71beebf44880b7111b195a8d0e90b35f37d8ae7cd3316d9 Feb 16 02:07:33.880088 master-0 kubenswrapper[7721]: I0216 02:07:33.880037 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" event={"ID":"8f918d5b-1a4c-4b56-98a4-5cef638bb615","Type":"ContainerStarted","Data":"75fc21ecbe0b7c5bd931a515a01c5554ff668c01d93a8da5d69bba29c9b07235"} Feb 16 02:07:33.880088 master-0 kubenswrapper[7721]: I0216 02:07:33.880084 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" event={"ID":"8f918d5b-1a4c-4b56-98a4-5cef638bb615","Type":"ContainerStarted","Data":"a5edd0c763da4ffd76b7ca44b263c2099d1fb2f218aab3c1cd65ac9a1f3a746b"} Feb 16 02:07:33.884935 master-0 kubenswrapper[7721]: I0216 02:07:33.884904 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"0615fd34-eaf9-4a3a-8543-25a7a5747194","Type":"ContainerStarted","Data":"34687e7a807c296d4e03913cf0d4730d2710fd3780bf2d83926deacd10e78353"} Feb 16 02:07:33.884935 master-0 kubenswrapper[7721]: I0216 02:07:33.884930 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"0615fd34-eaf9-4a3a-8543-25a7a5747194","Type":"ContainerStarted","Data":"245c08cde6ae6142a71beebf44880b7111b195a8d0e90b35f37d8ae7cd3316d9"} Feb 16 02:07:33.888595 master-0 kubenswrapper[7721]: I0216 02:07:33.888572 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd","Type":"ContainerStarted","Data":"bb9ffb6ca918ba3341c8df0e7c8c8ba7325d86a68eb6b2856270c9f7326551b5"} Feb 16 02:07:33.973348 master-0 kubenswrapper[7721]: I0216 02:07:33.973292 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:33.973543 master-0 kubenswrapper[7721]: I0216 02:07:33.973377 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:34.134005 master-0 kubenswrapper[7721]: I0216 02:07:34.133868 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" podStartSLOduration=26.133841005 podStartE2EDuration="26.133841005s" podCreationTimestamp="2026-02-16 02:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:34.131749554 +0000 UTC m=+57.625983826" watchObservedRunningTime="2026-02-16 02:07:34.133841005 +0000 UTC m=+57.628075267" Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: I0216 02:07:34.423542 7721 patch_prober.go:28] interesting 
pod/apiserver-578b9bc556-8g98v container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]log ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]etcd ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/max-in-flight-filter ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/openshift.io-startinformers ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: livez check failed Feb 16 02:07:34.424092 master-0 kubenswrapper[7721]: I0216 02:07:34.423821 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" podUID="8f918d5b-1a4c-4b56-98a4-5cef638bb615" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:07:34.713636 master-0 kubenswrapper[7721]: I0216 02:07:34.709020 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=3.7089964010000003 podStartE2EDuration="3.708996401s" podCreationTimestamp="2026-02-16 02:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:34.706818177 +0000 UTC m=+58.201052449" watchObservedRunningTime="2026-02-16 02:07:34.708996401 +0000 UTC m=+58.203230673" Feb 16 02:07:34.742581 master-0 kubenswrapper[7721]: I0216 02:07:34.742533 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c252310-b471-4560-bff7-1fc3e5a0ca6e" path="/var/lib/kubelet/pods/4c252310-b471-4560-bff7-1fc3e5a0ca6e/volumes" Feb 16 02:07:35.280763 master-0 kubenswrapper[7721]: I0216 02:07:35.280670 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=3.28064361 podStartE2EDuration="3.28064361s" podCreationTimestamp="2026-02-16 02:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:07:35.280509996 +0000 UTC m=+58.774744278" watchObservedRunningTime="2026-02-16 02:07:35.28064361 +0000 UTC m=+58.774877872" Feb 16 02:07:35.912140 master-0 kubenswrapper[7721]: I0216 02:07:35.907191 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_45476b57-538b-4031-80c9-8025a49e8e88/installer/0.log" Feb 16 02:07:35.912140 master-0 kubenswrapper[7721]: I0216 02:07:35.909500 7721 generic.go:334] "Generic (PLEG): container finished" podID="45476b57-538b-4031-80c9-8025a49e8e88" 
containerID="844468f5a50bbaafc70df5bd3186ac9f161b117793658af967159de5ce3fa619" exitCode=1 Feb 16 02:07:35.912140 master-0 kubenswrapper[7721]: I0216 02:07:35.910128 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"45476b57-538b-4031-80c9-8025a49e8e88","Type":"ContainerDied","Data":"844468f5a50bbaafc70df5bd3186ac9f161b117793658af967159de5ce3fa619"} Feb 16 02:07:36.073891 master-0 kubenswrapper[7721]: I0216 02:07:36.073836 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f"] Feb 16 02:07:36.074478 master-0 kubenswrapper[7721]: E0216 02:07:36.074424 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c252310-b471-4560-bff7-1fc3e5a0ca6e" containerName="installer" Feb 16 02:07:36.074627 master-0 kubenswrapper[7721]: I0216 02:07:36.074606 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c252310-b471-4560-bff7-1fc3e5a0ca6e" containerName="installer" Feb 16 02:07:36.074889 master-0 kubenswrapper[7721]: I0216 02:07:36.074865 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c252310-b471-4560-bff7-1fc3e5a0ca6e" containerName="installer" Feb 16 02:07:36.075591 master-0 kubenswrapper[7721]: I0216 02:07:36.075561 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:07:36.077586 master-0 kubenswrapper[7721]: I0216 02:07:36.077510 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 02:07:36.077936 master-0 kubenswrapper[7721]: I0216 02:07:36.077878 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 02:07:36.078370 master-0 kubenswrapper[7721]: I0216 02:07:36.078326 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 02:07:36.239716 master-0 kubenswrapper[7721]: I0216 02:07:36.239590 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbmnx\" (UniqueName: \"kubernetes.io/projected/a8d00a01-aa48-4830-a558-93a31cb98b31-kube-api-access-lbmnx\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:07:36.239716 master-0 kubenswrapper[7721]: I0216 02:07:36.239692 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a8d00a01-aa48-4830-a558-93a31cb98b31-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:07:36.287754 master-0 kubenswrapper[7721]: I0216 02:07:36.287702 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f"] Feb 16 02:07:36.346566 master-0 kubenswrapper[7721]: I0216 02:07:36.345626 7721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a8d00a01-aa48-4830-a558-93a31cb98b31-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:07:36.346566 master-0 kubenswrapper[7721]: I0216 02:07:36.345730 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbmnx\" (UniqueName: \"kubernetes.io/projected/a8d00a01-aa48-4830-a558-93a31cb98b31-kube-api-access-lbmnx\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:07:36.373212 master-0 kubenswrapper[7721]: I0216 02:07:36.365264 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a8d00a01-aa48-4830-a558-93a31cb98b31-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:07:36.380877 master-0 kubenswrapper[7721]: I0216 02:07:36.380571 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbmnx\" (UniqueName: \"kubernetes.io/projected/a8d00a01-aa48-4830-a558-93a31cb98b31-kube-api-access-lbmnx\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:07:36.393903 master-0 kubenswrapper[7721]: I0216 02:07:36.393453 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:07:38.979767 master-0 kubenswrapper[7721]: I0216 02:07:38.979714 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:38.990192 master-0 kubenswrapper[7721]: I0216 02:07:38.989361 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:07:39.376346 master-0 kubenswrapper[7721]: I0216 02:07:39.376048 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj"] Feb 16 02:07:39.377268 master-0 kubenswrapper[7721]: I0216 02:07:39.377229 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.380753 master-0 kubenswrapper[7721]: I0216 02:07:39.380710 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 02:07:39.380891 master-0 kubenswrapper[7721]: I0216 02:07:39.380855 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 02:07:39.380891 master-0 kubenswrapper[7721]: I0216 02:07:39.380871 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 02:07:39.381002 master-0 kubenswrapper[7721]: I0216 02:07:39.380817 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 02:07:39.390760 master-0 kubenswrapper[7721]: I0216 02:07:39.388680 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 02:07:39.499900 master-0 
kubenswrapper[7721]: I0216 02:07:39.499803 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-auth-proxy-config\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.499900 master-0 kubenswrapper[7721]: I0216 02:07:39.499886 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-config\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.500267 master-0 kubenswrapper[7721]: I0216 02:07:39.499924 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xwmq\" (UniqueName: \"kubernetes.io/projected/f27ae528-68de-4b59-9536-2d49b7a3cb29-kube-api-access-4xwmq\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.500267 master-0 kubenswrapper[7721]: I0216 02:07:39.499998 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f27ae528-68de-4b59-9536-2d49b7a3cb29-machine-approver-tls\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.601469 master-0 kubenswrapper[7721]: I0216 02:07:39.601375 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-auth-proxy-config\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.601469 master-0 kubenswrapper[7721]: I0216 02:07:39.601459 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-config\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.601861 master-0 kubenswrapper[7721]: I0216 02:07:39.601711 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xwmq\" (UniqueName: \"kubernetes.io/projected/f27ae528-68de-4b59-9536-2d49b7a3cb29-kube-api-access-4xwmq\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.601861 master-0 kubenswrapper[7721]: I0216 02:07:39.601835 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f27ae528-68de-4b59-9536-2d49b7a3cb29-machine-approver-tls\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.602526 master-0 kubenswrapper[7721]: I0216 02:07:39.602475 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-auth-proxy-config\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" 
Feb 16 02:07:39.602580 master-0 kubenswrapper[7721]: I0216 02:07:39.602485 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-config\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.606920 master-0 kubenswrapper[7721]: I0216 02:07:39.606570 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f27ae528-68de-4b59-9536-2d49b7a3cb29-machine-approver-tls\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.624527 master-0 kubenswrapper[7721]: I0216 02:07:39.624426 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xwmq\" (UniqueName: \"kubernetes.io/projected/f27ae528-68de-4b59-9536-2d49b7a3cb29-kube-api-access-4xwmq\") pod \"machine-approver-6c46d95f74-j4xhj\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:39.700773 master-0 kubenswrapper[7721]: I0216 02:07:39.700410 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" Feb 16 02:07:43.122915 master-0 kubenswrapper[7721]: I0216 02:07:43.122842 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg"] Feb 16 02:07:43.123801 master-0 kubenswrapper[7721]: I0216 02:07:43.123777 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.129752 master-0 kubenswrapper[7721]: I0216 02:07:43.129720 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 16 02:07:43.129993 master-0 kubenswrapper[7721]: I0216 02:07:43.129958 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 16 02:07:43.130191 master-0 kubenswrapper[7721]: I0216 02:07:43.130143 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-s4hmw" Feb 16 02:07:43.130361 master-0 kubenswrapper[7721]: I0216 02:07:43.130342 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 16 02:07:43.133051 master-0 kubenswrapper[7721]: I0216 02:07:43.133017 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 16 02:07:43.191639 master-0 kubenswrapper[7721]: I0216 02:07:43.191605 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.191760 master-0 kubenswrapper[7721]: I0216 02:07:43.191648 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cloud-credential-operator-serving-cert\") pod 
\"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.191760 master-0 kubenswrapper[7721]: I0216 02:07:43.191689 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8x2\" (UniqueName: \"kubernetes.io/projected/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-kube-api-access-xj8x2\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.256184 master-0 kubenswrapper[7721]: I0216 02:07:43.255132 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg"] Feb 16 02:07:43.293560 master-0 kubenswrapper[7721]: I0216 02:07:43.293488 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.293560 master-0 kubenswrapper[7721]: I0216 02:07:43.293561 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.293850 master-0 kubenswrapper[7721]: I0216 02:07:43.293642 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xj8x2\" (UniqueName: \"kubernetes.io/projected/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-kube-api-access-xj8x2\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.294975 master-0 kubenswrapper[7721]: I0216 02:07:43.294938 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.300456 master-0 kubenswrapper[7721]: I0216 02:07:43.300354 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.448931 master-0 kubenswrapper[7721]: I0216 02:07:43.448873 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8x2\" (UniqueName: \"kubernetes.io/projected/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-kube-api-access-xj8x2\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:43.480352 master-0 kubenswrapper[7721]: I0216 02:07:43.480280 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:07:44.717477 master-0 kubenswrapper[7721]: I0216 02:07:44.716467 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5"] Feb 16 02:07:44.717477 master-0 kubenswrapper[7721]: I0216 02:07:44.717402 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:07:44.720144 master-0 kubenswrapper[7721]: I0216 02:07:44.719840 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 02:07:44.720361 master-0 kubenswrapper[7721]: I0216 02:07:44.720300 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-wp42g" Feb 16 02:07:44.720454 master-0 kubenswrapper[7721]: I0216 02:07:44.720408 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 02:07:44.720650 master-0 kubenswrapper[7721]: I0216 02:07:44.720581 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 02:07:44.819510 master-0 kubenswrapper[7721]: I0216 02:07:44.817892 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b62004d-7fe3-47ae-8e26-8496befb047c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:07:44.819510 master-0 kubenswrapper[7721]: I0216 02:07:44.818024 7721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8g4\" (UniqueName: \"kubernetes.io/projected/5b62004d-7fe3-47ae-8e26-8496befb047c-kube-api-access-ln8g4\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:07:44.858286 master-0 kubenswrapper[7721]: I0216 02:07:44.855861 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"] Feb 16 02:07:44.858286 master-0 kubenswrapper[7721]: I0216 02:07:44.857000 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:44.860512 master-0 kubenswrapper[7721]: I0216 02:07:44.859306 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5"] Feb 16 02:07:44.861073 master-0 kubenswrapper[7721]: I0216 02:07:44.861022 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 02:07:44.861148 master-0 kubenswrapper[7721]: I0216 02:07:44.861058 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 16 02:07:44.861345 master-0 kubenswrapper[7721]: I0216 02:07:44.861313 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 16 02:07:44.861824 master-0 kubenswrapper[7721]: I0216 02:07:44.861729 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 02:07:44.919635 master-0 kubenswrapper[7721]: I0216 02:07:44.919565 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ln8g4\" (UniqueName: \"kubernetes.io/projected/5b62004d-7fe3-47ae-8e26-8496befb047c-kube-api-access-ln8g4\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:07:44.920173 master-0 kubenswrapper[7721]: I0216 02:07:44.920050 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b62004d-7fe3-47ae-8e26-8496befb047c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:07:44.923763 master-0 kubenswrapper[7721]: I0216 02:07:44.923723 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b62004d-7fe3-47ae-8e26-8496befb047c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:07:45.021778 master-0 kubenswrapper[7721]: I0216 02:07:45.021606 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.021778 master-0 kubenswrapper[7721]: I0216 02:07:45.021709 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: 
\"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.022060 master-0 kubenswrapper[7721]: I0216 02:07:45.021784 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.022060 master-0 kubenswrapper[7721]: I0216 02:07:45.021835 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.022060 master-0 kubenswrapper[7721]: I0216 02:07:45.021883 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4djm\" (UniqueName: \"kubernetes.io/projected/27a42eb0-677c-414d-b0ec-f945ec39b7e9-kube-api-access-l4djm\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.123184 master-0 kubenswrapper[7721]: I0216 02:07:45.123109 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.123518 master-0 
kubenswrapper[7721]: I0216 02:07:45.123449 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.123679 master-0 kubenswrapper[7721]: I0216 02:07:45.123611 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.123747 master-0 kubenswrapper[7721]: I0216 02:07:45.123678 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.123747 master-0 kubenswrapper[7721]: I0216 02:07:45.123735 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4djm\" (UniqueName: \"kubernetes.io/projected/27a42eb0-677c-414d-b0ec-f945ec39b7e9-kube-api-access-l4djm\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.124413 master-0 kubenswrapper[7721]: I0216 02:07:45.124350 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config\") pod 
\"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.125346 master-0 kubenswrapper[7721]: I0216 02:07:45.125283 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.128262 master-0 kubenswrapper[7721]: I0216 02:07:45.128187 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.141420 master-0 kubenswrapper[7721]: I0216 02:07:45.141385 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.192571 master-0 kubenswrapper[7721]: I0216 02:07:45.192495 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"] Feb 16 02:07:45.408612 master-0 kubenswrapper[7721]: I0216 02:07:45.404952 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8g4\" (UniqueName: \"kubernetes.io/projected/5b62004d-7fe3-47ae-8e26-8496befb047c-kube-api-access-ln8g4\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: 
\"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:07:45.408612 master-0 kubenswrapper[7721]: I0216 02:07:45.408285 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4djm\" (UniqueName: \"kubernetes.io/projected/27a42eb0-677c-414d-b0ec-f945ec39b7e9-kube-api-access-l4djm\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.479101 master-0 kubenswrapper[7721]: I0216 02:07:45.479020 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:07:45.638103 master-0 kubenswrapper[7721]: I0216 02:07:45.638052 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:07:46.179724 master-0 kubenswrapper[7721]: I0216 02:07:46.179648 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5574f479df-xqnpg"] Feb 16 02:07:46.180404 master-0 kubenswrapper[7721]: I0216 02:07:46.180018 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" podUID="7eda8a42-765e-47cf-896f-324e8185062e" containerName="controller-manager" containerID="cri-o://1ac7f9a59837f186887fd1e06eef65d6e68ebbfb41544b75e3a57d65b27ead8f" gracePeriod=30 Feb 16 02:07:46.375226 master-0 kubenswrapper[7721]: I0216 02:07:46.375143 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj"] Feb 16 02:07:46.376879 master-0 kubenswrapper[7721]: I0216 02:07:46.376849 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.380788 master-0 kubenswrapper[7721]: I0216 02:07:46.379413 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 16 02:07:46.382669 master-0 kubenswrapper[7721]: I0216 02:07:46.380218 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 16 02:07:46.460410 master-0 kubenswrapper[7721]: I0216 02:07:46.460091 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"] Feb 16 02:07:46.460706 master-0 kubenswrapper[7721]: I0216 02:07:46.460524 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" podUID="f2a7e185-78f4-4d69-b126-d465374a6218" containerName="route-controller-manager" containerID="cri-o://9208e311bfc82baf59af5b784307b97c850fa510c8d36aefe36f52bc21d5d523" gracePeriod=30 Feb 16 02:07:46.462422 master-0 kubenswrapper[7721]: I0216 02:07:46.462352 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj"] Feb 16 02:07:46.542705 master-0 kubenswrapper[7721]: I0216 02:07:46.542580 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.542705 master-0 kubenswrapper[7721]: I0216 02:07:46.542699 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.543104 master-0 kubenswrapper[7721]: I0216 02:07:46.542741 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgj82\" (UniqueName: \"kubernetes.io/projected/48863ff6-63ac-42d7-bac7-29d888c92db9-kube-api-access-kgj82\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.644492 master-0 kubenswrapper[7721]: I0216 02:07:46.644322 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.644848 master-0 kubenswrapper[7721]: I0216 02:07:46.644554 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.645582 master-0 kubenswrapper[7721]: I0216 02:07:46.645538 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgj82\" (UniqueName: \"kubernetes.io/projected/48863ff6-63ac-42d7-bac7-29d888c92db9-kube-api-access-kgj82\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: 
\"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.647011 master-0 kubenswrapper[7721]: I0216 02:07:46.646955 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.649368 master-0 kubenswrapper[7721]: I0216 02:07:46.649328 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:46.679163 master-0 kubenswrapper[7721]: I0216 02:07:46.675990 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-llpf5"] Feb 16 02:07:46.679163 master-0 kubenswrapper[7721]: I0216 02:07:46.677576 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.680539 master-0 kubenswrapper[7721]: I0216 02:07:46.679535 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm"] Feb 16 02:07:46.684413 master-0 kubenswrapper[7721]: I0216 02:07:46.680721 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:07:46.684413 master-0 kubenswrapper[7721]: I0216 02:07:46.681316 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 02:07:46.684413 master-0 kubenswrapper[7721]: I0216 02:07:46.681416 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 02:07:46.686270 master-0 kubenswrapper[7721]: I0216 02:07:46.685855 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 16 02:07:46.686270 master-0 kubenswrapper[7721]: I0216 02:07:46.686121 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 16 02:07:46.688859 master-0 kubenswrapper[7721]: I0216 02:07:46.686589 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-785jj" Feb 16 02:07:46.688859 master-0 kubenswrapper[7721]: I0216 02:07:46.686819 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 02:07:46.696958 master-0 kubenswrapper[7721]: I0216 02:07:46.696917 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 16 02:07:46.753289 master-0 kubenswrapper[7721]: I0216 02:07:46.746241 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0abea413-e08a-465a-8ec4-2be650bfd5bd-snapshots\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.753289 master-0 kubenswrapper[7721]: I0216 02:07:46.746294 
7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.753289 master-0 kubenswrapper[7721]: I0216 02:07:46.746352 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.753289 master-0 kubenswrapper[7721]: I0216 02:07:46.746383 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxlnm\" (UniqueName: \"kubernetes.io/projected/0abea413-e08a-465a-8ec4-2be650bfd5bd-kube-api-access-bxlnm\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.753289 master-0 kubenswrapper[7721]: I0216 02:07:46.746406 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxrl\" (UniqueName: \"kubernetes.io/projected/a77e2f8f-d164-4a58-aab2-f3444c05cacb-kube-api-access-bsxrl\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:07:46.753289 master-0 kubenswrapper[7721]: I0216 02:07:46.746459 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:07:46.753289 master-0 kubenswrapper[7721]: I0216 02:07:46.746495 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.848073 master-0 kubenswrapper[7721]: I0216 02:07:46.847960 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.848333 master-0 kubenswrapper[7721]: I0216 02:07:46.848123 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0abea413-e08a-465a-8ec4-2be650bfd5bd-snapshots\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.848333 master-0 kubenswrapper[7721]: I0216 02:07:46.848193 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 
02:07:46.848333 master-0 kubenswrapper[7721]: I0216 02:07:46.848296 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.848833 master-0 kubenswrapper[7721]: I0216 02:07:46.848358 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxlnm\" (UniqueName: \"kubernetes.io/projected/0abea413-e08a-465a-8ec4-2be650bfd5bd-kube-api-access-bxlnm\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.848833 master-0 kubenswrapper[7721]: I0216 02:07:46.848711 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxrl\" (UniqueName: \"kubernetes.io/projected/a77e2f8f-d164-4a58-aab2-f3444c05cacb-kube-api-access-bsxrl\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:07:46.849246 master-0 kubenswrapper[7721]: I0216 02:07:46.848847 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:07:46.849246 master-0 kubenswrapper[7721]: I0216 02:07:46.848951 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: 
\"kubernetes.io/empty-dir/0abea413-e08a-465a-8ec4-2be650bfd5bd-snapshots\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.850714 master-0 kubenswrapper[7721]: I0216 02:07:46.850659 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.852838 master-0 kubenswrapper[7721]: I0216 02:07:46.852788 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.855521 master-0 kubenswrapper[7721]: I0216 02:07:46.855471 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:46.858518 master-0 kubenswrapper[7721]: I0216 02:07:46.857286 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:07:47.017555 master-0 
kubenswrapper[7721]: I0216 02:07:47.015039 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm"] Feb 16 02:07:47.017555 master-0 kubenswrapper[7721]: I0216 02:07:47.016785 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-llpf5"] Feb 16 02:07:47.030462 master-0 kubenswrapper[7721]: I0216 02:07:47.024898 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxrl\" (UniqueName: \"kubernetes.io/projected/a77e2f8f-d164-4a58-aab2-f3444c05cacb-kube-api-access-bsxrl\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:07:47.030462 master-0 kubenswrapper[7721]: I0216 02:07:47.027291 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxlnm\" (UniqueName: \"kubernetes.io/projected/0abea413-e08a-465a-8ec4-2be650bfd5bd-kube-api-access-bxlnm\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:47.030462 master-0 kubenswrapper[7721]: I0216 02:07:47.028170 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:07:47.036830 master-0 kubenswrapper[7721]: I0216 02:07:47.036763 7721 generic.go:334] "Generic (PLEG): container finished" podID="456e6c3a-c16c-470b-a0cd-bb79865b54f0" containerID="84b61562b0c4e54147ae15c3e99cac0408baf94416f7643d3aafcf6087c2cdf4" exitCode=0 Feb 16 02:07:47.036941 master-0 kubenswrapper[7721]: I0216 02:07:47.036904 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" event={"ID":"456e6c3a-c16c-470b-a0cd-bb79865b54f0","Type":"ContainerDied","Data":"84b61562b0c4e54147ae15c3e99cac0408baf94416f7643d3aafcf6087c2cdf4"} Feb 16 02:07:47.037708 master-0 kubenswrapper[7721]: I0216 02:07:47.037668 7721 scope.go:117] "RemoveContainer" containerID="84b61562b0c4e54147ae15c3e99cac0408baf94416f7643d3aafcf6087c2cdf4" Feb 16 02:07:47.039170 master-0 kubenswrapper[7721]: I0216 02:07:47.039054 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgj82\" (UniqueName: \"kubernetes.io/projected/48863ff6-63ac-42d7-bac7-29d888c92db9-kube-api-access-kgj82\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:47.040808 master-0 kubenswrapper[7721]: I0216 02:07:47.040737 7721 generic.go:334] "Generic (PLEG): container finished" podID="7eda8a42-765e-47cf-896f-324e8185062e" containerID="1ac7f9a59837f186887fd1e06eef65d6e68ebbfb41544b75e3a57d65b27ead8f" exitCode=0 Feb 16 02:07:47.040885 master-0 kubenswrapper[7721]: I0216 02:07:47.040808 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" event={"ID":"7eda8a42-765e-47cf-896f-324e8185062e","Type":"ContainerDied","Data":"1ac7f9a59837f186887fd1e06eef65d6e68ebbfb41544b75e3a57d65b27ead8f"} Feb 16 02:07:47.042278 master-0 
kubenswrapper[7721]: I0216 02:07:47.042236 7721 generic.go:334] "Generic (PLEG): container finished" podID="f2a7e185-78f4-4d69-b126-d465374a6218" containerID="9208e311bfc82baf59af5b784307b97c850fa510c8d36aefe36f52bc21d5d523" exitCode=0 Feb 16 02:07:47.042325 master-0 kubenswrapper[7721]: I0216 02:07:47.042277 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" event={"ID":"f2a7e185-78f4-4d69-b126-d465374a6218","Type":"ContainerDied","Data":"9208e311bfc82baf59af5b784307b97c850fa510c8d36aefe36f52bc21d5d523"} Feb 16 02:07:47.058285 master-0 kubenswrapper[7721]: I0216 02:07:47.058210 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:07:47.250300 master-0 kubenswrapper[7721]: I0216 02:07:47.243947 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 02:07:47.250300 master-0 kubenswrapper[7721]: I0216 02:07:47.244307 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="0615fd34-eaf9-4a3a-8543-25a7a5747194" containerName="installer" containerID="cri-o://34687e7a807c296d4e03913cf0d4730d2710fd3780bf2d83926deacd10e78353" gracePeriod=30 Feb 16 02:07:47.295025 master-0 kubenswrapper[7721]: I0216 02:07:47.294766 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:07:48.185030 master-0 kubenswrapper[7721]: I0216 02:07:48.184866 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g"] Feb 16 02:07:48.197430 master-0 kubenswrapper[7721]: I0216 02:07:48.186506 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.197430 master-0 kubenswrapper[7721]: I0216 02:07:48.190985 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 02:07:48.197430 master-0 kubenswrapper[7721]: I0216 02:07:48.191348 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 02:07:48.197430 master-0 kubenswrapper[7721]: I0216 02:07:48.191621 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 02:07:48.197430 master-0 kubenswrapper[7721]: I0216 02:07:48.192381 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 02:07:48.197430 master-0 kubenswrapper[7721]: I0216 02:07:48.196531 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 02:07:48.197430 master-0 kubenswrapper[7721]: I0216 02:07:48.196918 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g"] Feb 16 02:07:48.289462 master-0 kubenswrapper[7721]: I0216 02:07:48.289365 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmjjn\" (UniqueName: \"kubernetes.io/projected/d870332c-2498-4135-a9b3-a71e67c2805b-kube-api-access-wmjjn\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.289462 master-0 kubenswrapper[7721]: I0216 02:07:48.289446 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.289462 master-0 kubenswrapper[7721]: I0216 02:07:48.289475 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.290157 master-0 kubenswrapper[7721]: I0216 02:07:48.289528 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.321450 master-0 kubenswrapper[7721]: I0216 02:07:48.321329 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg"] Feb 16 02:07:48.333451 master-0 kubenswrapper[7721]: I0216 02:07:48.323206 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.333451 master-0 kubenswrapper[7721]: I0216 02:07:48.326907 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 02:07:48.333451 master-0 kubenswrapper[7721]: I0216 02:07:48.330865 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 02:07:48.333451 master-0 kubenswrapper[7721]: I0216 02:07:48.331047 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 02:07:48.333451 master-0 kubenswrapper[7721]: I0216 02:07:48.331607 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:07:48.333451 master-0 kubenswrapper[7721]: I0216 02:07:48.331673 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-ccbvw" Feb 16 02:07:48.333451 master-0 kubenswrapper[7721]: I0216 02:07:48.331685 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 02:07:48.390940 master-0 kubenswrapper[7721]: I0216 02:07:48.390888 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.390940 master-0 kubenswrapper[7721]: I0216 02:07:48.390941 7721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b120a297-2f2b-43f4-a19a-dad863cb2272-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.391307 master-0 kubenswrapper[7721]: I0216 02:07:48.391082 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.391307 master-0 kubenswrapper[7721]: I0216 02:07:48.391206 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b120a297-2f2b-43f4-a19a-dad863cb2272-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.391307 master-0 kubenswrapper[7721]: I0216 02:07:48.391294 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.391583 master-0 kubenswrapper[7721]: I0216 02:07:48.391500 7721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rs4\" (UniqueName: \"kubernetes.io/projected/b120a297-2f2b-43f4-a19a-dad863cb2272-kube-api-access-v4rs4\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.391653 master-0 kubenswrapper[7721]: I0216 02:07:48.391591 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.391697 master-0 kubenswrapper[7721]: I0216 02:07:48.391658 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.391742 master-0 kubenswrapper[7721]: I0216 02:07:48.391716 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.391793 master-0 kubenswrapper[7721]: I0216 02:07:48.391768 7721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wmjjn\" (UniqueName: \"kubernetes.io/projected/d870332c-2498-4135-a9b3-a71e67c2805b-kube-api-access-wmjjn\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.391793 master-0 kubenswrapper[7721]: I0216 02:07:48.391781 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.400134 master-0 kubenswrapper[7721]: I0216 02:07:48.400062 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.420643 master-0 kubenswrapper[7721]: I0216 02:07:48.420585 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmjjn\" (UniqueName: \"kubernetes.io/projected/d870332c-2498-4135-a9b3-a71e67c2805b-kube-api-access-wmjjn\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.500108 master-0 kubenswrapper[7721]: I0216 02:07:48.499966 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-images\") pod 
\"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.500108 master-0 kubenswrapper[7721]: I0216 02:07:48.500036 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.500108 master-0 kubenswrapper[7721]: I0216 02:07:48.500070 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b120a297-2f2b-43f4-a19a-dad863cb2272-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.500108 master-0 kubenswrapper[7721]: I0216 02:07:48.500106 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b120a297-2f2b-43f4-a19a-dad863cb2272-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.500468 master-0 kubenswrapper[7721]: I0216 02:07:48.500153 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rs4\" (UniqueName: 
\"kubernetes.io/projected/b120a297-2f2b-43f4-a19a-dad863cb2272-kube-api-access-v4rs4\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.501159 master-0 kubenswrapper[7721]: I0216 02:07:48.501118 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b120a297-2f2b-43f4-a19a-dad863cb2272-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.501210 master-0 kubenswrapper[7721]: I0216 02:07:48.501166 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.501907 master-0 kubenswrapper[7721]: I0216 02:07:48.501832 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.522295 master-0 kubenswrapper[7721]: I0216 02:07:48.522013 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b120a297-2f2b-43f4-a19a-dad863cb2272-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.542460 master-0 kubenswrapper[7721]: I0216 02:07:48.538198 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rs4\" (UniqueName: \"kubernetes.io/projected/b120a297-2f2b-43f4-a19a-dad863cb2272-kube-api-access-v4rs4\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.570462 master-0 kubenswrapper[7721]: I0216 02:07:48.567614 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:07:48.651685 master-0 kubenswrapper[7721]: I0216 02:07:48.651610 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" Feb 16 02:07:48.732456 master-0 kubenswrapper[7721]: I0216 02:07:48.731015 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"] Feb 16 02:07:48.732456 master-0 kubenswrapper[7721]: I0216 02:07:48.732126 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.735495 master-0 kubenswrapper[7721]: I0216 02:07:48.735045 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 02:07:48.735495 master-0 kubenswrapper[7721]: I0216 02:07:48.735381 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 02:07:48.735856 master-0 kubenswrapper[7721]: I0216 02:07:48.735557 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 02:07:48.748238 master-0 kubenswrapper[7721]: I0216 02:07:48.747071 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"] Feb 16 02:07:48.800477 master-0 kubenswrapper[7721]: I0216 02:07:48.794607 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6"] Feb 16 02:07:48.800477 master-0 kubenswrapper[7721]: I0216 02:07:48.799118 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.815273 master-0 kubenswrapper[7721]: I0216 02:07:48.805267 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 02:07:48.815273 master-0 kubenswrapper[7721]: I0216 02:07:48.814273 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.817872 master-0 kubenswrapper[7721]: I0216 02:07:48.817350 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.817872 master-0 kubenswrapper[7721]: I0216 02:07:48.817425 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.825417 master-0 kubenswrapper[7721]: I0216 02:07:48.824854 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc3354cb-b6c3-40a5-a695-cccb079ad292-tmpfs\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " 
pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.825417 master-0 kubenswrapper[7721]: I0216 02:07:48.825008 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.825417 master-0 kubenswrapper[7721]: I0216 02:07:48.825071 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm44l\" (UniqueName: \"kubernetes.io/projected/dc3354cb-b6c3-40a5-a695-cccb079ad292-kube-api-access-hm44l\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.825417 master-0 kubenswrapper[7721]: I0216 02:07:48.825114 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.825417 master-0 kubenswrapper[7721]: I0216 02:07:48.825162 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88kmw\" (UniqueName: \"kubernetes.io/projected/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-kube-api-access-88kmw\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.827998 master-0 kubenswrapper[7721]: I0216 02:07:48.827952 7721 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6"] Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: I0216 02:07:48.927713 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: I0216 02:07:48.927772 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: I0216 02:07:48.927808 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc3354cb-b6c3-40a5-a695-cccb079ad292-tmpfs\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: I0216 02:07:48.927877 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: I0216 02:07:48.927901 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm44l\" (UniqueName: 
\"kubernetes.io/projected/dc3354cb-b6c3-40a5-a695-cccb079ad292-kube-api-access-hm44l\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: I0216 02:07:48.927919 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: I0216 02:07:48.927946 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88kmw\" (UniqueName: \"kubernetes.io/projected/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-kube-api-access-88kmw\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: I0216 02:07:48.927977 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: E0216 02:07:48.928835 7721 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 16 02:07:48.929844 master-0 kubenswrapper[7721]: E0216 02:07:48.928909 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls 
podName:fec84b8a-a0d1-4b07-8827-cef0beb89ecd nodeName:}" failed. No retries permitted until 2026-02-16 02:07:49.428883883 +0000 UTC m=+72.923118145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-qw2zq" (UID: "fec84b8a-a0d1-4b07-8827-cef0beb89ecd") : secret "machine-api-operator-tls" not found Feb 16 02:07:48.930666 master-0 kubenswrapper[7721]: I0216 02:07:48.930614 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.930909 master-0 kubenswrapper[7721]: I0216 02:07:48.930846 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:48.931046 master-0 kubenswrapper[7721]: I0216 02:07:48.930993 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc3354cb-b6c3-40a5-a695-cccb079ad292-tmpfs\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.940214 master-0 kubenswrapper[7721]: I0216 02:07:48.940127 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: 
\"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.948178 master-0 kubenswrapper[7721]: I0216 02:07:48.948114 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm44l\" (UniqueName: \"kubernetes.io/projected/dc3354cb-b6c3-40a5-a695-cccb079ad292-kube-api-access-hm44l\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.949769 master-0 kubenswrapper[7721]: I0216 02:07:48.949740 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:07:48.956357 master-0 kubenswrapper[7721]: I0216 02:07:48.956303 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88kmw\" (UniqueName: \"kubernetes.io/projected/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-kube-api-access-88kmw\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:07:49.133684 master-0 kubenswrapper[7721]: I0216 02:07:49.133510 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6"
Feb 16 02:07:49.436638 master-0 kubenswrapper[7721]: I0216 02:07:49.436556 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"
Feb 16 02:07:49.458741 master-0 kubenswrapper[7721]: I0216 02:07:49.458656 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"
Feb 16 02:07:49.601254 master-0 kubenswrapper[7721]: I0216 02:07:49.600402 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 16 02:07:49.601884 master-0 kubenswrapper[7721]: I0216 02:07:49.601696 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.613480 master-0 kubenswrapper[7721]: I0216 02:07:49.613207 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 16 02:07:49.640734 master-0 kubenswrapper[7721]: I0216 02:07:49.640684 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-var-lock\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.641083 master-0 kubenswrapper[7721]: I0216 02:07:49.640763 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.641083 master-0 kubenswrapper[7721]: I0216 02:07:49.640831 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.682542 master-0 kubenswrapper[7721]: I0216 02:07:49.682406 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"
Feb 16 02:07:49.743826 master-0 kubenswrapper[7721]: I0216 02:07:49.743625 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-var-lock\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.743826 master-0 kubenswrapper[7721]: I0216 02:07:49.743704 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.743826 master-0 kubenswrapper[7721]: I0216 02:07:49.743808 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.744473 master-0 kubenswrapper[7721]: I0216 02:07:49.744379 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.744552 master-0 kubenswrapper[7721]: I0216 02:07:49.744521 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-var-lock\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.784675 master-0 kubenswrapper[7721]: I0216 02:07:49.784603 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:49.945779 master-0 kubenswrapper[7721]: I0216 02:07:49.945728 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:07:51.075063 master-0 kubenswrapper[7721]: I0216 02:07:51.074923 7721 generic.go:334] "Generic (PLEG): container finished" podID="e379cfaf-3a4c-40e7-8641-3524b3669295" containerID="561891ec1509f7c4965b19f5a07719f12421d6e230fb355e2417164216f94e4e" exitCode=0
Feb 16 02:07:51.075063 master-0 kubenswrapper[7721]: I0216 02:07:51.074987 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" event={"ID":"e379cfaf-3a4c-40e7-8641-3524b3669295","Type":"ContainerDied","Data":"561891ec1509f7c4965b19f5a07719f12421d6e230fb355e2417164216f94e4e"}
Feb 16 02:07:51.075635 master-0 kubenswrapper[7721]: I0216 02:07:51.075428 7721 scope.go:117] "RemoveContainer" containerID="561891ec1509f7c4965b19f5a07719f12421d6e230fb355e2417164216f94e4e"
Feb 16 02:07:52.283247 master-0 kubenswrapper[7721]: I0216 02:07:52.283210 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_45476b57-538b-4031-80c9-8025a49e8e88/installer/0.log"
Feb 16 02:07:52.283655 master-0 kubenswrapper[7721]: I0216 02:07:52.283285 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:07:52.384373 master-0 kubenswrapper[7721]: I0216 02:07:52.384204 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45476b57-538b-4031-80c9-8025a49e8e88-kube-api-access\") pod \"45476b57-538b-4031-80c9-8025a49e8e88\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") "
Feb 16 02:07:52.384373 master-0 kubenswrapper[7721]: I0216 02:07:52.384369 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-var-lock\") pod \"45476b57-538b-4031-80c9-8025a49e8e88\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") "
Feb 16 02:07:52.384373 master-0 kubenswrapper[7721]: I0216 02:07:52.384415 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-kubelet-dir\") pod \"45476b57-538b-4031-80c9-8025a49e8e88\" (UID: \"45476b57-538b-4031-80c9-8025a49e8e88\") "
Feb 16 02:07:52.384373 master-0 kubenswrapper[7721]: I0216 02:07:52.384589 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-var-lock" (OuterVolumeSpecName: "var-lock") pod "45476b57-538b-4031-80c9-8025a49e8e88" (UID: "45476b57-538b-4031-80c9-8025a49e8e88"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:07:52.384966 master-0 kubenswrapper[7721]: I0216 02:07:52.384788 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "45476b57-538b-4031-80c9-8025a49e8e88" (UID: "45476b57-538b-4031-80c9-8025a49e8e88"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:07:52.384966 master-0 kubenswrapper[7721]: I0216 02:07:52.384810 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:52.394004 master-0 kubenswrapper[7721]: I0216 02:07:52.393948 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45476b57-538b-4031-80c9-8025a49e8e88-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "45476b57-538b-4031-80c9-8025a49e8e88" (UID: "45476b57-538b-4031-80c9-8025a49e8e88"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:07:52.485851 master-0 kubenswrapper[7721]: I0216 02:07:52.485672 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45476b57-538b-4031-80c9-8025a49e8e88-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:52.485851 master-0 kubenswrapper[7721]: I0216 02:07:52.485705 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45476b57-538b-4031-80c9-8025a49e8e88-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:52.756850 master-0 kubenswrapper[7721]: I0216 02:07:52.756814 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:52.800591 master-0 kubenswrapper[7721]: I0216 02:07:52.800280 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm95w\" (UniqueName: \"kubernetes.io/projected/f2a7e185-78f4-4d69-b126-d465374a6218-kube-api-access-lm95w\") pod \"f2a7e185-78f4-4d69-b126-d465374a6218\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") "
Feb 16 02:07:52.800591 master-0 kubenswrapper[7721]: I0216 02:07:52.800538 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-client-ca\") pod \"f2a7e185-78f4-4d69-b126-d465374a6218\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") "
Feb 16 02:07:52.800591 master-0 kubenswrapper[7721]: I0216 02:07:52.800572 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-config\") pod \"f2a7e185-78f4-4d69-b126-d465374a6218\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") "
Feb 16 02:07:52.800778 master-0 kubenswrapper[7721]: I0216 02:07:52.800668 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a7e185-78f4-4d69-b126-d465374a6218-serving-cert\") pod \"f2a7e185-78f4-4d69-b126-d465374a6218\" (UID: \"f2a7e185-78f4-4d69-b126-d465374a6218\") "
Feb 16 02:07:52.801320 master-0 kubenswrapper[7721]: I0216 02:07:52.801269 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2a7e185-78f4-4d69-b126-d465374a6218" (UID: "f2a7e185-78f4-4d69-b126-d465374a6218"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:52.801962 master-0 kubenswrapper[7721]: I0216 02:07:52.801933 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-config" (OuterVolumeSpecName: "config") pod "f2a7e185-78f4-4d69-b126-d465374a6218" (UID: "f2a7e185-78f4-4d69-b126-d465374a6218"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:52.822533 master-0 kubenswrapper[7721]: I0216 02:07:52.821354 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a7e185-78f4-4d69-b126-d465374a6218-kube-api-access-lm95w" (OuterVolumeSpecName: "kube-api-access-lm95w") pod "f2a7e185-78f4-4d69-b126-d465374a6218" (UID: "f2a7e185-78f4-4d69-b126-d465374a6218"). InnerVolumeSpecName "kube-api-access-lm95w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:07:52.822533 master-0 kubenswrapper[7721]: I0216 02:07:52.822169 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a7e185-78f4-4d69-b126-d465374a6218-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2a7e185-78f4-4d69-b126-d465374a6218" (UID: "f2a7e185-78f4-4d69-b126-d465374a6218"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:07:52.828981 master-0 kubenswrapper[7721]: I0216 02:07:52.828929 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm95w\" (UniqueName: \"kubernetes.io/projected/f2a7e185-78f4-4d69-b126-d465374a6218-kube-api-access-lm95w\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:52.829097 master-0 kubenswrapper[7721]: I0216 02:07:52.828991 7721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:52.829097 master-0 kubenswrapper[7721]: I0216 02:07:52.829007 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a7e185-78f4-4d69-b126-d465374a6218-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:52.829097 master-0 kubenswrapper[7721]: I0216 02:07:52.829019 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a7e185-78f4-4d69-b126-d465374a6218-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:52.870847 master-0 kubenswrapper[7721]: I0216 02:07:52.870593 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg"
Feb 16 02:07:52.923518 master-0 kubenswrapper[7721]: I0216 02:07:52.923473 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f"]
Feb 16 02:07:53.031975 master-0 kubenswrapper[7721]: I0216 02:07:53.030896 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-config\") pod \"7eda8a42-765e-47cf-896f-324e8185062e\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") "
Feb 16 02:07:53.031975 master-0 kubenswrapper[7721]: I0216 02:07:53.031027 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-proxy-ca-bundles\") pod \"7eda8a42-765e-47cf-896f-324e8185062e\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") "
Feb 16 02:07:53.031975 master-0 kubenswrapper[7721]: I0216 02:07:53.031073 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eda8a42-765e-47cf-896f-324e8185062e-serving-cert\") pod \"7eda8a42-765e-47cf-896f-324e8185062e\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") "
Feb 16 02:07:53.031975 master-0 kubenswrapper[7721]: I0216 02:07:53.031115 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-client-ca\") pod \"7eda8a42-765e-47cf-896f-324e8185062e\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") "
Feb 16 02:07:53.031975 master-0 kubenswrapper[7721]: I0216 02:07:53.031140 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmgwf\" (UniqueName: \"kubernetes.io/projected/7eda8a42-765e-47cf-896f-324e8185062e-kube-api-access-cmgwf\") pod \"7eda8a42-765e-47cf-896f-324e8185062e\" (UID: \"7eda8a42-765e-47cf-896f-324e8185062e\") "
Feb 16 02:07:53.031975 master-0 kubenswrapper[7721]: I0216 02:07:53.031677 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7eda8a42-765e-47cf-896f-324e8185062e" (UID: "7eda8a42-765e-47cf-896f-324e8185062e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:53.031975 master-0 kubenswrapper[7721]: I0216 02:07:53.031981 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-client-ca" (OuterVolumeSpecName: "client-ca") pod "7eda8a42-765e-47cf-896f-324e8185062e" (UID: "7eda8a42-765e-47cf-896f-324e8185062e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:53.032498 master-0 kubenswrapper[7721]: I0216 02:07:53.031971 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-config" (OuterVolumeSpecName: "config") pod "7eda8a42-765e-47cf-896f-324e8185062e" (UID: "7eda8a42-765e-47cf-896f-324e8185062e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:07:53.036996 master-0 kubenswrapper[7721]: I0216 02:07:53.036261 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eda8a42-765e-47cf-896f-324e8185062e-kube-api-access-cmgwf" (OuterVolumeSpecName: "kube-api-access-cmgwf") pod "7eda8a42-765e-47cf-896f-324e8185062e" (UID: "7eda8a42-765e-47cf-896f-324e8185062e"). InnerVolumeSpecName "kube-api-access-cmgwf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:07:53.058475 master-0 kubenswrapper[7721]: I0216 02:07:53.057156 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eda8a42-765e-47cf-896f-324e8185062e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7eda8a42-765e-47cf-896f-324e8185062e" (UID: "7eda8a42-765e-47cf-896f-324e8185062e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:07:53.097476 master-0 kubenswrapper[7721]: I0216 02:07:53.095206 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg" event={"ID":"7eda8a42-765e-47cf-896f-324e8185062e","Type":"ContainerDied","Data":"71f88debd6e32c8926a0394683094104a5453ca71a39b14a81064d10cb0255f9"}
Feb 16 02:07:53.097476 master-0 kubenswrapper[7721]: I0216 02:07:53.095260 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5574f479df-xqnpg"
Feb 16 02:07:53.097476 master-0 kubenswrapper[7721]: I0216 02:07:53.095277 7721 scope.go:117] "RemoveContainer" containerID="1ac7f9a59837f186887fd1e06eef65d6e68ebbfb41544b75e3a57d65b27ead8f"
Feb 16 02:07:53.113559 master-0 kubenswrapper[7721]: I0216 02:07:53.112471 7721 generic.go:334] "Generic (PLEG): container finished" podID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerID="e5940b75272319c7aabca48d2d6edec79fe11d41b3036bd4f4cedae45e24b5d7" exitCode=0
Feb 16 02:07:53.113559 master-0 kubenswrapper[7721]: I0216 02:07:53.112547 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qtbw" event={"ID":"0bdb65c2-c4bc-4e33-9e5a-61542c659700","Type":"ContainerDied","Data":"e5940b75272319c7aabca48d2d6edec79fe11d41b3036bd4f4cedae45e24b5d7"}
Feb 16 02:07:53.127161 master-0 kubenswrapper[7721]: I0216 02:07:53.126660 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"45476b57-538b-4031-80c9-8025a49e8e88","Type":"ContainerDied","Data":"fd6a595d794b352d399441301e60f9c24a74357bb8a6e67bdca2c5e538615037"}
Feb 16 02:07:53.127161 master-0 kubenswrapper[7721]: I0216 02:07:53.126693 7721 scope.go:117] "RemoveContainer" containerID="844468f5a50bbaafc70df5bd3186ac9f161b117793658af967159de5ce3fa619"
Feb 16 02:07:53.127161 master-0 kubenswrapper[7721]: I0216 02:07:53.126776 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 02:07:53.139179 master-0 kubenswrapper[7721]: I0216 02:07:53.134207 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:53.139179 master-0 kubenswrapper[7721]: I0216 02:07:53.134249 7721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:53.139179 master-0 kubenswrapper[7721]: I0216 02:07:53.134264 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eda8a42-765e-47cf-896f-324e8185062e-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:53.139179 master-0 kubenswrapper[7721]: I0216 02:07:53.134274 7721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eda8a42-765e-47cf-896f-324e8185062e-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:53.139179 master-0 kubenswrapper[7721]: I0216 02:07:53.134284 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmgwf\" (UniqueName: \"kubernetes.io/projected/7eda8a42-765e-47cf-896f-324e8185062e-kube-api-access-cmgwf\") on node \"master-0\" DevicePath \"\""
Feb 16 02:07:53.139179 master-0 kubenswrapper[7721]: I0216 02:07:53.137112 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5"]
Feb 16 02:07:53.153208 master-0 kubenswrapper[7721]: I0216 02:07:53.151186 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" event={"ID":"f27ae528-68de-4b59-9536-2d49b7a3cb29","Type":"ContainerStarted","Data":"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22"}
Feb 16 02:07:53.153208 master-0 kubenswrapper[7721]: I0216 02:07:53.151265 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" event={"ID":"f27ae528-68de-4b59-9536-2d49b7a3cb29","Type":"ContainerStarted","Data":"c8272269dd94d2b1ec1e9213f6473a063042376209d9c67263706c106c76b22d"}
Feb 16 02:07:53.169478 master-0 kubenswrapper[7721]: I0216 02:07:53.166597 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" event={"ID":"b120a297-2f2b-43f4-a19a-dad863cb2272","Type":"ContainerStarted","Data":"b8494b263fe1529d6f7a8254addf28f0b268f636a2e08846d97c6e9e64889d8b"}
Feb 16 02:07:53.183461 master-0 kubenswrapper[7721]: I0216 02:07:53.179221 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trqk8" event={"ID":"ab463f74-d1e7-44f1-9634-d9f63685b06d","Type":"ContainerStarted","Data":"699ab655e0c4a6b31309afab886f90e9b48704fa893a80a048fd279bf60a2f9d"}
Feb 16 02:07:53.187460 master-0 kubenswrapper[7721]: I0216 02:07:53.184193 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" event={"ID":"e379cfaf-3a4c-40e7-8641-3524b3669295","Type":"ContainerStarted","Data":"e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a"}
Feb 16 02:07:53.192005 master-0 kubenswrapper[7721]: I0216 02:07:53.190028 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkp55" event={"ID":"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0","Type":"ContainerStarted","Data":"c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56"}
Feb 16 02:07:53.196595 master-0 kubenswrapper[7721]: I0216 02:07:53.192832 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" event={"ID":"456e6c3a-c16c-470b-a0cd-bb79865b54f0","Type":"ContainerStarted","Data":"d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610"}
Feb 16 02:07:53.196595 master-0 kubenswrapper[7721]: I0216 02:07:53.196401 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kb98" event={"ID":"774b6ff9-0e37-48fd-96c6-571859fec492","Type":"ContainerStarted","Data":"7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d"}
Feb 16 02:07:53.202945 master-0 kubenswrapper[7721]: I0216 02:07:53.198023 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" event={"ID":"a8d00a01-aa48-4830-a558-93a31cb98b31","Type":"ContainerStarted","Data":"283ae7ddfaf1351f34dacc8beed10e6971e1ef88d2fe208447bc84c265e096e1"}
Feb 16 02:07:53.202945 master-0 kubenswrapper[7721]: I0216 02:07:53.199172 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w" event={"ID":"f2a7e185-78f4-4d69-b126-d465374a6218","Type":"ContainerDied","Data":"4fc83c72923d06bb27dc72e1294113ac5b34e6ed149d10d4a4b519d50c5b89a9"}
Feb 16 02:07:53.202945 master-0 kubenswrapper[7721]: I0216 02:07:53.199221 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"
Feb 16 02:07:53.257466 master-0 kubenswrapper[7721]: I0216 02:07:53.252868 7721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Feb 16 02:07:53.257466 master-0 kubenswrapper[7721]: I0216 02:07:53.253122 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" containerID="cri-o://1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071" gracePeriod=30
Feb 16 02:07:53.257466 master-0 kubenswrapper[7721]: I0216 02:07:53.253265 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" containerID="cri-o://d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44" gracePeriod=30
Feb 16 02:07:53.257466 master-0 kubenswrapper[7721]: I0216 02:07:53.255018 7721 scope.go:117] "RemoveContainer" containerID="9208e311bfc82baf59af5b784307b97c850fa510c8d36aefe36f52bc21d5d523"
Feb 16 02:07:53.257875 master-0 kubenswrapper[7721]: I0216 02:07:53.257692 7721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: E0216 02:07:53.257939 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.257963 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: E0216 02:07:53.257984 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a7e185-78f4-4d69-b126-d465374a6218" containerName="route-controller-manager"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.257992 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a7e185-78f4-4d69-b126-d465374a6218" containerName="route-controller-manager"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: E0216 02:07:53.258004 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.258012 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: E0216 02:07:53.258023 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eda8a42-765e-47cf-896f-324e8185062e" containerName="controller-manager"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.258032 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eda8a42-765e-47cf-896f-324e8185062e" containerName="controller-manager"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: E0216 02:07:53.258046 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45476b57-538b-4031-80c9-8025a49e8e88" containerName="installer"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.258054 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="45476b57-538b-4031-80c9-8025a49e8e88" containerName="installer"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.258166 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.258182 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eda8a42-765e-47cf-896f-324e8185062e" containerName="controller-manager"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.258193 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="45476b57-538b-4031-80c9-8025a49e8e88" containerName="installer"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.258203 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a7e185-78f4-4d69-b126-d465374a6218" containerName="route-controller-manager"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.258214 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl"
Feb 16 02:07:53.260359 master-0 kubenswrapper[7721]: I0216 02:07:53.259883 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.343103 master-0 kubenswrapper[7721]: I0216 02:07:53.342039 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.343103 master-0 kubenswrapper[7721]: I0216 02:07:53.342212 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.343103 master-0 kubenswrapper[7721]: I0216 02:07:53.342228 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.343103 master-0 kubenswrapper[7721]: I0216 02:07:53.342252 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.343103 master-0 kubenswrapper[7721]: I0216 02:07:53.342267 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.343103 master-0 kubenswrapper[7721]: I0216 02:07:53.342283 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448307 master-0 kubenswrapper[7721]: I0216 02:07:53.448252 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448411 master-0 kubenswrapper[7721]: I0216 02:07:53.448319 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448411 master-0 kubenswrapper[7721]: I0216 02:07:53.448365 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448411 master-0 kubenswrapper[7721]: I0216 02:07:53.448386 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448527 master-0 kubenswrapper[7721]: I0216 02:07:53.448413 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448527 master-0 kubenswrapper[7721]: I0216 02:07:53.448483 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448626 master-0 kubenswrapper[7721]: I0216 02:07:53.448608 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448695 master-0 kubenswrapper[7721]: I0216 02:07:53.448256 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448695 master-0 kubenswrapper[7721]: I0216 02:07:53.448658 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448695 master-0 kubenswrapper[7721]: I0216 02:07:53.448682 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448777 master-0 kubenswrapper[7721]: I0216 02:07:53.448705 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:53.448777 master-0 kubenswrapper[7721]: I0216 02:07:53.448728 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:07:54.211046 master-0 kubenswrapper[7721]: I0216 02:07:54.210962 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" event={"ID":"5b62004d-7fe3-47ae-8e26-8496befb047c","Type":"ContainerStarted","Data":"f9e85a0740edade16aca29d94977dcf8952ab075721ac965ae1df68ba4eec6d2"}
Feb 16 02:07:54.213303 master-0 kubenswrapper[7721]: I0216 02:07:54.213251 7721 generic.go:334] "Generic (PLEG): container finished" podID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerID="699ab655e0c4a6b31309afab886f90e9b48704fa893a80a048fd279bf60a2f9d" exitCode=0
Feb 16 02:07:54.213382 master-0 kubenswrapper[7721]: I0216 02:07:54.213348 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trqk8" event={"ID":"ab463f74-d1e7-44f1-9634-d9f63685b06d","Type":"ContainerDied","Data":"699ab655e0c4a6b31309afab886f90e9b48704fa893a80a048fd279bf60a2f9d"}
Feb 16 02:07:54.215921 master-0 kubenswrapper[7721]: I0216 02:07:54.215861 7721 generic.go:334] "Generic (PLEG): container finished" podID="774b6ff9-0e37-48fd-96c6-571859fec492" containerID="7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d" exitCode=0
Feb 16 02:07:54.216018 master-0 kubenswrapper[7721]: I0216 02:07:54.215972 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kb98" event={"ID":"774b6ff9-0e37-48fd-96c6-571859fec492","Type":"ContainerDied","Data":"7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d"}
Feb 16 02:07:54.218320 master-0 kubenswrapper[7721]: I0216 02:07:54.218271 7721 generic.go:334] "Generic (PLEG): container finished" podID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerID="c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56" exitCode=0
Feb 16 02:07:54.218367 master-0 kubenswrapper[7721]: I0216 02:07:54.218344 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkp55" event={"ID":"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0","Type":"ContainerDied","Data":"c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56"}
Feb 16 02:07:57.240985 master-0 kubenswrapper[7721]: I0216 02:07:57.240844 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" event={"ID":"a8d00a01-aa48-4830-a558-93a31cb98b31","Type":"ContainerStarted","Data":"cbb215837f1e5b2ced545b28b81dafe9fa0f617cf84f3ee5cf431ddb83b1fb21"}
Feb 16
02:07:57.244341 master-0 kubenswrapper[7721]: I0216 02:07:57.244296 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" event={"ID":"f27ae528-68de-4b59-9536-2d49b7a3cb29","Type":"ContainerStarted","Data":"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3"} Feb 16 02:07:57.246906 master-0 kubenswrapper[7721]: I0216 02:07:57.246791 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" event={"ID":"5b62004d-7fe3-47ae-8e26-8496befb047c","Type":"ContainerStarted","Data":"910ba5e799eb76c82fffb6096d33595b56a5b71babc917d550cd0c73a8cd72e2"} Feb 16 02:07:57.248564 master-0 kubenswrapper[7721]: I0216 02:07:57.246830 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" event={"ID":"5b62004d-7fe3-47ae-8e26-8496befb047c","Type":"ContainerStarted","Data":"51f00918e38781d6ebe9409d52687447fb8fae31e4a37c1ec90db3038853713d"} Feb 16 02:07:57.249519 master-0 kubenswrapper[7721]: I0216 02:07:57.249485 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" event={"ID":"b120a297-2f2b-43f4-a19a-dad863cb2272","Type":"ContainerStarted","Data":"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e"} Feb 16 02:07:58.315775 master-0 kubenswrapper[7721]: I0216 02:07:58.315693 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" event={"ID":"b120a297-2f2b-43f4-a19a-dad863cb2272","Type":"ContainerStarted","Data":"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28"} Feb 16 02:07:58.316392 master-0 kubenswrapper[7721]: I0216 02:07:58.315793 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" event={"ID":"b120a297-2f2b-43f4-a19a-dad863cb2272","Type":"ContainerStarted","Data":"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4"} Feb 16 02:07:59.799137 master-0 kubenswrapper[7721]: I0216 02:07:59.799086 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:08:01.339685 master-0 kubenswrapper[7721]: I0216 02:08:01.339483 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkp55" event={"ID":"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0","Type":"ContainerStarted","Data":"d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373"} Feb 16 02:08:01.344214 master-0 kubenswrapper[7721]: I0216 02:08:01.344171 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trqk8" event={"ID":"ab463f74-d1e7-44f1-9634-d9f63685b06d","Type":"ContainerStarted","Data":"fcd918e42e09edbde82af27329ac4d0663845d79ca2085b97d9bb5eab9b7e0af"} Feb 16 02:08:01.348139 master-0 kubenswrapper[7721]: I0216 02:08:01.348051 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qtbw" event={"ID":"0bdb65c2-c4bc-4e33-9e5a-61542c659700","Type":"ContainerStarted","Data":"74431daf39a9feaef137ae8d22f9b9d06dc8b940ba1cc1cbd03fb059358f6dbd"} Feb 16 02:08:01.352632 master-0 kubenswrapper[7721]: I0216 02:08:01.352578 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kb98" event={"ID":"774b6ff9-0e37-48fd-96c6-571859fec492","Type":"ContainerStarted","Data":"60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224"} Feb 16 02:08:02.482222 master-0 kubenswrapper[7721]: I0216 02:08:02.482145 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-9kb98" Feb 16 02:08:02.482222 master-0 kubenswrapper[7721]: I0216 02:08:02.482225 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9kb98" Feb 16 02:08:03.345717 master-0 kubenswrapper[7721]: I0216 02:08:03.345647 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:08:03.345717 master-0 kubenswrapper[7721]: I0216 02:08:03.345725 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:08:03.541838 master-0 kubenswrapper[7721]: I0216 02:08:03.541780 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9kb98" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="registry-server" probeResult="failure" output=< Feb 16 02:08:03.541838 master-0 kubenswrapper[7721]: timeout: failed to connect service ":50051" within 1s Feb 16 02:08:03.541838 master-0 kubenswrapper[7721]: > Feb 16 02:08:04.378380 master-0 kubenswrapper[7721]: I0216 02:08:04.378299 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0615fd34-eaf9-4a3a-8543-25a7a5747194/installer/0.log" Feb 16 02:08:04.378685 master-0 kubenswrapper[7721]: I0216 02:08:04.378389 7721 generic.go:334] "Generic (PLEG): container finished" podID="0615fd34-eaf9-4a3a-8543-25a7a5747194" containerID="34687e7a807c296d4e03913cf0d4730d2710fd3780bf2d83926deacd10e78353" exitCode=1 Feb 16 02:08:04.378685 master-0 kubenswrapper[7721]: I0216 02:08:04.378468 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"0615fd34-eaf9-4a3a-8543-25a7a5747194","Type":"ContainerDied","Data":"34687e7a807c296d4e03913cf0d4730d2710fd3780bf2d83926deacd10e78353"} Feb 16 02:08:04.404767 master-0 
kubenswrapper[7721]: I0216 02:08:04.404613 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vkp55" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="registry-server" probeResult="failure" output=< Feb 16 02:08:04.404767 master-0 kubenswrapper[7721]: timeout: failed to connect service ":50051" within 1s Feb 16 02:08:04.404767 master-0 kubenswrapper[7721]: > Feb 16 02:08:05.017027 master-0 kubenswrapper[7721]: I0216 02:08:05.016942 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-trqk8" Feb 16 02:08:05.017820 master-0 kubenswrapper[7721]: I0216 02:08:05.017038 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-trqk8" Feb 16 02:08:05.082948 master-0 kubenswrapper[7721]: I0216 02:08:05.082863 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-trqk8" Feb 16 02:08:05.290841 master-0 kubenswrapper[7721]: I0216 02:08:05.290755 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0615fd34-eaf9-4a3a-8543-25a7a5747194/installer/0.log" Feb 16 02:08:05.290841 master-0 kubenswrapper[7721]: I0216 02:08:05.290851 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:08:05.388616 master-0 kubenswrapper[7721]: I0216 02:08:05.388547 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0615fd34-eaf9-4a3a-8543-25a7a5747194/installer/0.log" Feb 16 02:08:05.389238 master-0 kubenswrapper[7721]: I0216 02:08:05.389118 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"0615fd34-eaf9-4a3a-8543-25a7a5747194","Type":"ContainerDied","Data":"245c08cde6ae6142a71beebf44880b7111b195a8d0e90b35f37d8ae7cd3316d9"} Feb 16 02:08:05.389238 master-0 kubenswrapper[7721]: I0216 02:08:05.389202 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 02:08:05.389481 master-0 kubenswrapper[7721]: I0216 02:08:05.389212 7721 scope.go:117] "RemoveContainer" containerID="34687e7a807c296d4e03913cf0d4730d2710fd3780bf2d83926deacd10e78353" Feb 16 02:08:05.434875 master-0 kubenswrapper[7721]: I0216 02:08:05.434823 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-var-lock\") pod \"0615fd34-eaf9-4a3a-8543-25a7a5747194\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " Feb 16 02:08:05.435039 master-0 kubenswrapper[7721]: I0216 02:08:05.434901 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-kubelet-dir\") pod \"0615fd34-eaf9-4a3a-8543-25a7a5747194\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " Feb 16 02:08:05.435039 master-0 kubenswrapper[7721]: I0216 02:08:05.434993 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0615fd34-eaf9-4a3a-8543-25a7a5747194-kube-api-access\") pod \"0615fd34-eaf9-4a3a-8543-25a7a5747194\" (UID: \"0615fd34-eaf9-4a3a-8543-25a7a5747194\") " Feb 16 02:08:05.435508 master-0 kubenswrapper[7721]: I0216 02:08:05.435395 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0615fd34-eaf9-4a3a-8543-25a7a5747194" (UID: "0615fd34-eaf9-4a3a-8543-25a7a5747194"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:08:05.435610 master-0 kubenswrapper[7721]: I0216 02:08:05.435577 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-var-lock" (OuterVolumeSpecName: "var-lock") pod "0615fd34-eaf9-4a3a-8543-25a7a5747194" (UID: "0615fd34-eaf9-4a3a-8543-25a7a5747194"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:08:05.445685 master-0 kubenswrapper[7721]: I0216 02:08:05.445638 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0615fd34-eaf9-4a3a-8543-25a7a5747194-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0615fd34-eaf9-4a3a-8543-25a7a5747194" (UID: "0615fd34-eaf9-4a3a-8543-25a7a5747194"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:08:05.536745 master-0 kubenswrapper[7721]: I0216 02:08:05.536609 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:05.536745 master-0 kubenswrapper[7721]: I0216 02:08:05.536655 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0615fd34-eaf9-4a3a-8543-25a7a5747194-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:05.536745 master-0 kubenswrapper[7721]: I0216 02:08:05.536676 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0615fd34-eaf9-4a3a-8543-25a7a5747194-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:05.634614 master-0 kubenswrapper[7721]: I0216 02:08:05.634480 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9qtbw" Feb 16 02:08:05.634929 master-0 kubenswrapper[7721]: I0216 02:08:05.634638 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9qtbw" Feb 16 02:08:05.702192 master-0 kubenswrapper[7721]: I0216 02:08:05.702041 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9qtbw" Feb 16 02:08:06.472572 master-0 kubenswrapper[7721]: I0216 02:08:06.472481 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9qtbw" Feb 16 02:08:06.476788 master-0 kubenswrapper[7721]: E0216 02:08:06.476687 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 02:08:06.477584 master-0 
kubenswrapper[7721]: I0216 02:08:06.477524 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 02:08:07.408031 master-0 kubenswrapper[7721]: I0216 02:08:07.407877 7721 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848" exitCode=0 Feb 16 02:08:07.408570 master-0 kubenswrapper[7721]: I0216 02:08:07.407992 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848"} Feb 16 02:08:07.408782 master-0 kubenswrapper[7721]: I0216 02:08:07.408748 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"2d1ef49b96ebdd856812371adfbca86182a1e5d84ab3a176435a0a697e3fe026"} Feb 16 02:08:07.414300 master-0 kubenswrapper[7721]: I0216 02:08:07.414203 7721 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="e320e64d785f6de34aed9795724368979c84944d7c7a25afb100430d56e9ef3e" exitCode=1 Feb 16 02:08:07.414561 master-0 kubenswrapper[7721]: I0216 02:08:07.414516 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"e320e64d785f6de34aed9795724368979c84944d7c7a25afb100430d56e9ef3e"} Feb 16 02:08:07.415493 master-0 kubenswrapper[7721]: I0216 02:08:07.415462 7721 scope.go:117] "RemoveContainer" containerID="e320e64d785f6de34aed9795724368979c84944d7c7a25afb100430d56e9ef3e" Feb 16 02:08:07.887844 master-0 kubenswrapper[7721]: I0216 02:08:07.887759 7721 patch_prober.go:28] interesting pod/authentication-operator-755d954778-bngv9 
container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Feb 16 02:08:07.888689 master-0 kubenswrapper[7721]: I0216 02:08:07.887846 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" podUID="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Feb 16 02:08:08.199612 master-0 kubenswrapper[7721]: E0216 02:08:08.199413 7721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:08.426362 master-0 kubenswrapper[7721]: I0216 02:08:08.426276 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445"} Feb 16 02:08:08.429190 master-0 kubenswrapper[7721]: I0216 02:08:08.429121 7721 generic.go:334] "Generic (PLEG): container finished" podID="4733c2df-0f5a-4696-b8c6-2568ebc7debc" containerID="3537f96b40e8859a6a366ec6550aaba73f34d9f862f4f2e89eccfbc047d01b00" exitCode=0 Feb 16 02:08:08.429303 master-0 kubenswrapper[7721]: I0216 02:08:08.429191 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"4733c2df-0f5a-4696-b8c6-2568ebc7debc","Type":"ContainerDied","Data":"3537f96b40e8859a6a366ec6550aaba73f34d9f862f4f2e89eccfbc047d01b00"} Feb 16 02:08:08.627183 master-0 kubenswrapper[7721]: E0216 02:08:08.626729 7721 
kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:07:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:07:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:07:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:07:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c
650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b09
59712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\"],\\\"sizeBytes\\\":459915626},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c\\\"],\\\"sizeBytes\\\":458531660},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\"],\\\"sizeBytes\\\":452956763},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956\\\"],\\\"sizeBytes\\\":451401927},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b\\\"],\\\"sizeBytes\\\":443654349},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e\\\"],\\\"sizeBytes\\\":442871962},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0\\\"],\\\"sizeBytes\\\":438101353},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44\\\"],\\\"sizeBytes\\\":433480092},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78\\\"],\\\"sizeBytes\\\":406416461},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe\\\"],\\\"sizeBytes\\\":402172859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541\\\"],\\\"sizeBytes\\\":391352099}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:09.817650 master-0 kubenswrapper[7721]: I0216 02:08:09.817545 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 02:08:09.914023 master-0 kubenswrapper[7721]: I0216 02:08:09.913888 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-var-lock\") pod \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " Feb 16 02:08:09.914349 master-0 kubenswrapper[7721]: I0216 02:08:09.914051 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-var-lock" (OuterVolumeSpecName: "var-lock") pod "4733c2df-0f5a-4696-b8c6-2568ebc7debc" (UID: "4733c2df-0f5a-4696-b8c6-2568ebc7debc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:08:09.914349 master-0 kubenswrapper[7721]: I0216 02:08:09.914140 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kube-api-access\") pod \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " Feb 16 02:08:09.914349 master-0 kubenswrapper[7721]: I0216 02:08:09.914200 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kubelet-dir\") pod \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\" (UID: \"4733c2df-0f5a-4696-b8c6-2568ebc7debc\") " Feb 16 02:08:09.914621 master-0 kubenswrapper[7721]: I0216 02:08:09.914483 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4733c2df-0f5a-4696-b8c6-2568ebc7debc" (UID: "4733c2df-0f5a-4696-b8c6-2568ebc7debc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:08:09.914715 master-0 kubenswrapper[7721]: I0216 02:08:09.914698 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:09.914810 master-0 kubenswrapper[7721]: I0216 02:08:09.914725 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4733c2df-0f5a-4696-b8c6-2568ebc7debc-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:09.917969 master-0 kubenswrapper[7721]: I0216 02:08:09.917900 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4733c2df-0f5a-4696-b8c6-2568ebc7debc" (UID: "4733c2df-0f5a-4696-b8c6-2568ebc7debc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:08:10.015940 master-0 kubenswrapper[7721]: I0216 02:08:10.015736 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4733c2df-0f5a-4696-b8c6-2568ebc7debc-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:10.445830 master-0 kubenswrapper[7721]: I0216 02:08:10.445762 7721 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="0d84c00dcc11900a2f5a4ff15f798ef8c8b6cc92a9b7e1f32a7c33bfeed4a478" exitCode=1 Feb 16 02:08:10.446079 master-0 kubenswrapper[7721]: I0216 02:08:10.445882 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerDied","Data":"0d84c00dcc11900a2f5a4ff15f798ef8c8b6cc92a9b7e1f32a7c33bfeed4a478"} Feb 16 02:08:10.446772 master-0 kubenswrapper[7721]: I0216 02:08:10.446737 7721 scope.go:117] "RemoveContainer" containerID="0d84c00dcc11900a2f5a4ff15f798ef8c8b6cc92a9b7e1f32a7c33bfeed4a478" Feb 16 02:08:10.448344 master-0 kubenswrapper[7721]: I0216 02:08:10.448304 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"4733c2df-0f5a-4696-b8c6-2568ebc7debc","Type":"ContainerDied","Data":"8066623cf1d51e73e4a446a8cf51f10003367928813b2f433dc4283c2b007eff"} Feb 16 02:08:10.448392 master-0 kubenswrapper[7721]: I0216 02:08:10.448357 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8066623cf1d51e73e4a446a8cf51f10003367928813b2f433dc4283c2b007eff" Feb 16 02:08:10.448504 master-0 kubenswrapper[7721]: I0216 02:08:10.448475 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 02:08:10.826586 master-0 kubenswrapper[7721]: I0216 02:08:10.826498 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:08:11.460627 master-0 kubenswrapper[7721]: I0216 02:08:11.460509 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-b47jp_6c02961f-30ec-4405-b7fa-9c4192342ae9/openshift-controller-manager-operator/0.log" Feb 16 02:08:11.460627 master-0 kubenswrapper[7721]: I0216 02:08:11.460596 7721 generic.go:334] "Generic (PLEG): container finished" podID="6c02961f-30ec-4405-b7fa-9c4192342ae9" containerID="4af3b63baf882cf9b9d02a791b48a2c854ad5ccd1fbd43903fb2e66b8e587e95" exitCode=1 Feb 16 02:08:11.460974 master-0 kubenswrapper[7721]: I0216 02:08:11.460700 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" event={"ID":"6c02961f-30ec-4405-b7fa-9c4192342ae9","Type":"ContainerDied","Data":"4af3b63baf882cf9b9d02a791b48a2c854ad5ccd1fbd43903fb2e66b8e587e95"} Feb 16 02:08:11.461465 master-0 kubenswrapper[7721]: I0216 02:08:11.461396 7721 scope.go:117] "RemoveContainer" containerID="4af3b63baf882cf9b9d02a791b48a2c854ad5ccd1fbd43903fb2e66b8e587e95" Feb 16 02:08:11.466306 master-0 kubenswrapper[7721]: I0216 02:08:11.466260 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_80f43f07-ce08-4c21-9463-ea983a110244/installer/0.log" Feb 16 02:08:11.466373 master-0 kubenswrapper[7721]: I0216 02:08:11.466344 7721 generic.go:334] "Generic (PLEG): container finished" podID="80f43f07-ce08-4c21-9463-ea983a110244" containerID="b4ee28c7394858e0cf4928c25f023b84c89ee6af3676ca6c853d7b858571c63f" exitCode=1 Feb 16 02:08:11.466563 master-0 kubenswrapper[7721]: I0216 
02:08:11.466458 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"80f43f07-ce08-4c21-9463-ea983a110244","Type":"ContainerDied","Data":"b4ee28c7394858e0cf4928c25f023b84c89ee6af3676ca6c853d7b858571c63f"} Feb 16 02:08:11.470191 master-0 kubenswrapper[7721]: I0216 02:08:11.470140 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"1bad524fd514e3639a6a8b060873c8398b9f534aa2528726df9aa2897827465b"} Feb 16 02:08:11.842152 master-0 kubenswrapper[7721]: I0216 02:08:11.841983 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:08:12.482706 master-0 kubenswrapper[7721]: I0216 02:08:12.482594 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-b47jp_6c02961f-30ec-4405-b7fa-9c4192342ae9/openshift-controller-manager-operator/0.log" Feb 16 02:08:12.483587 master-0 kubenswrapper[7721]: I0216 02:08:12.483166 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" event={"ID":"6c02961f-30ec-4405-b7fa-9c4192342ae9","Type":"ContainerStarted","Data":"907bfaa35e251ac0a99127e043064ef8d7828048025a8b998d4e1bd9a8208385"} Feb 16 02:08:12.548985 master-0 kubenswrapper[7721]: I0216 02:08:12.548910 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9kb98" Feb 16 02:08:12.635371 master-0 kubenswrapper[7721]: I0216 02:08:12.635322 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9kb98" Feb 16 02:08:12.898943 master-0 kubenswrapper[7721]: I0216 02:08:12.898862 7721 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_80f43f07-ce08-4c21-9463-ea983a110244/installer/0.log" Feb 16 02:08:12.900033 master-0 kubenswrapper[7721]: I0216 02:08:12.898974 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 02:08:13.062568 master-0 kubenswrapper[7721]: I0216 02:08:13.062494 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-var-lock\") pod \"80f43f07-ce08-4c21-9463-ea983a110244\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " Feb 16 02:08:13.063064 master-0 kubenswrapper[7721]: I0216 02:08:13.063024 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-kubelet-dir\") pod \"80f43f07-ce08-4c21-9463-ea983a110244\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " Feb 16 02:08:13.063354 master-0 kubenswrapper[7721]: I0216 02:08:13.062731 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-var-lock" (OuterVolumeSpecName: "var-lock") pod "80f43f07-ce08-4c21-9463-ea983a110244" (UID: "80f43f07-ce08-4c21-9463-ea983a110244"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:08:13.063539 master-0 kubenswrapper[7721]: I0216 02:08:13.063209 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "80f43f07-ce08-4c21-9463-ea983a110244" (UID: "80f43f07-ce08-4c21-9463-ea983a110244"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:08:13.063823 master-0 kubenswrapper[7721]: I0216 02:08:13.063783 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80f43f07-ce08-4c21-9463-ea983a110244-kube-api-access\") pod \"80f43f07-ce08-4c21-9463-ea983a110244\" (UID: \"80f43f07-ce08-4c21-9463-ea983a110244\") " Feb 16 02:08:13.064389 master-0 kubenswrapper[7721]: I0216 02:08:13.064354 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:13.064801 master-0 kubenswrapper[7721]: I0216 02:08:13.064595 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80f43f07-ce08-4c21-9463-ea983a110244-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:13.068384 master-0 kubenswrapper[7721]: I0216 02:08:13.068269 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f43f07-ce08-4c21-9463-ea983a110244-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "80f43f07-ce08-4c21-9463-ea983a110244" (UID: "80f43f07-ce08-4c21-9463-ea983a110244"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:08:13.176996 master-0 kubenswrapper[7721]: I0216 02:08:13.167603 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80f43f07-ce08-4c21-9463-ea983a110244-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:13.418108 master-0 kubenswrapper[7721]: I0216 02:08:13.418030 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:08:13.467748 master-0 kubenswrapper[7721]: I0216 02:08:13.467528 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:08:13.490967 master-0 kubenswrapper[7721]: I0216 02:08:13.490906 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_80f43f07-ce08-4c21-9463-ea983a110244/installer/0.log" Feb 16 02:08:13.491543 master-0 kubenswrapper[7721]: I0216 02:08:13.491465 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"80f43f07-ce08-4c21-9463-ea983a110244","Type":"ContainerDied","Data":"aae309ad89c83d46c9fddf6708eda09d37f1fa06aa9277a0a246c53f3525897c"} Feb 16 02:08:13.491686 master-0 kubenswrapper[7721]: I0216 02:08:13.491550 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae309ad89c83d46c9fddf6708eda09d37f1fa06aa9277a0a246c53f3525897c" Feb 16 02:08:13.491833 master-0 kubenswrapper[7721]: I0216 02:08:13.491744 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 02:08:14.842930 master-0 kubenswrapper[7721]: I0216 02:08:14.842754 7721 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:15.085856 master-0 kubenswrapper[7721]: I0216 02:08:15.085729 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-trqk8" Feb 16 02:08:17.887789 master-0 kubenswrapper[7721]: I0216 02:08:17.887713 7721 patch_prober.go:28] interesting pod/authentication-operator-755d954778-bngv9 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Feb 16 02:08:17.888619 master-0 kubenswrapper[7721]: I0216 02:08:17.887800 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" podUID="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Feb 16 02:08:18.200688 master-0 kubenswrapper[7721]: E0216 02:08:18.200578 7721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:18.627912 master-0 kubenswrapper[7721]: E0216 02:08:18.627722 7721 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:20.418103 master-0 kubenswrapper[7721]: E0216 02:08:20.417999 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 02:08:20.542518 master-0 kubenswrapper[7721]: I0216 02:08:20.542418 7721 generic.go:334] "Generic (PLEG): container finished" podID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerID="d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44" exitCode=0 Feb 16 02:08:21.552330 master-0 kubenswrapper[7721]: I0216 02:08:21.552268 7721 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2" exitCode=0 Feb 16 02:08:21.552851 master-0 kubenswrapper[7721]: I0216 02:08:21.552335 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2"} Feb 16 02:08:23.408495 master-0 kubenswrapper[7721]: I0216 02:08:23.408405 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log" Feb 16 02:08:23.409133 master-0 kubenswrapper[7721]: I0216 02:08:23.408562 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:08:23.526936 master-0 kubenswrapper[7721]: I0216 02:08:23.526839 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"400a178a4d5e9a88ba5bbbd1da2ad15e\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " Feb 16 02:08:23.527205 master-0 kubenswrapper[7721]: I0216 02:08:23.526891 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs" (OuterVolumeSpecName: "certs") pod "400a178a4d5e9a88ba5bbbd1da2ad15e" (UID: "400a178a4d5e9a88ba5bbbd1da2ad15e"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:08:23.527205 master-0 kubenswrapper[7721]: I0216 02:08:23.527127 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"400a178a4d5e9a88ba5bbbd1da2ad15e\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " Feb 16 02:08:23.527357 master-0 kubenswrapper[7721]: I0216 02:08:23.527241 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir" (OuterVolumeSpecName: "data-dir") pod "400a178a4d5e9a88ba5bbbd1da2ad15e" (UID: "400a178a4d5e9a88ba5bbbd1da2ad15e"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:08:23.527803 master-0 kubenswrapper[7721]: I0216 02:08:23.527756 7721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:23.527803 master-0 kubenswrapper[7721]: I0216 02:08:23.527790 7721 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:08:23.570589 master-0 kubenswrapper[7721]: I0216 02:08:23.570472 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log" Feb 16 02:08:23.570589 master-0 kubenswrapper[7721]: I0216 02:08:23.570540 7721 generic.go:334] "Generic (PLEG): container finished" podID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerID="1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071" exitCode=137 Feb 16 02:08:23.570830 master-0 kubenswrapper[7721]: I0216 02:08:23.570651 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 02:08:23.570830 master-0 kubenswrapper[7721]: I0216 02:08:23.570701 7721 scope.go:117] "RemoveContainer" containerID="d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44" Feb 16 02:08:23.574170 master-0 kubenswrapper[7721]: I0216 02:08:23.574132 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-v7lmz_e379cfaf-3a4c-40e7-8641-3524b3669295/openshift-apiserver-operator/1.log" Feb 16 02:08:23.574852 master-0 kubenswrapper[7721]: I0216 02:08:23.574795 7721 generic.go:334] "Generic (PLEG): container finished" podID="e379cfaf-3a4c-40e7-8641-3524b3669295" containerID="e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a" exitCode=255 Feb 16 02:08:23.574997 master-0 kubenswrapper[7721]: I0216 02:08:23.574848 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" event={"ID":"e379cfaf-3a4c-40e7-8641-3524b3669295","Type":"ContainerDied","Data":"e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a"} Feb 16 02:08:23.575874 master-0 kubenswrapper[7721]: I0216 02:08:23.575836 7721 scope.go:117] "RemoveContainer" containerID="e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a" Feb 16 02:08:23.576270 master-0 kubenswrapper[7721]: E0216 02:08:23.576209 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-6d4655d9cf-v7lmz_openshift-apiserver-operator(e379cfaf-3a4c-40e7-8641-3524b3669295)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" podUID="e379cfaf-3a4c-40e7-8641-3524b3669295" Feb 16 02:08:23.577490 master-0 kubenswrapper[7721]: I0216 02:08:23.577429 7721 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-dctqr_456e6c3a-c16c-470b-a0cd-bb79865b54f0/network-operator/1.log" Feb 16 02:08:23.578100 master-0 kubenswrapper[7721]: I0216 02:08:23.578063 7721 generic.go:334] "Generic (PLEG): container finished" podID="456e6c3a-c16c-470b-a0cd-bb79865b54f0" containerID="d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610" exitCode=255 Feb 16 02:08:23.578204 master-0 kubenswrapper[7721]: I0216 02:08:23.578101 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" event={"ID":"456e6c3a-c16c-470b-a0cd-bb79865b54f0","Type":"ContainerDied","Data":"d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610"} Feb 16 02:08:23.578805 master-0 kubenswrapper[7721]: I0216 02:08:23.578755 7721 scope.go:117] "RemoveContainer" containerID="d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610" Feb 16 02:08:23.579107 master-0 kubenswrapper[7721]: E0216 02:08:23.579054 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=network-operator pod=network-operator-6fcf4c966-dctqr_openshift-network-operator(456e6c3a-c16c-470b-a0cd-bb79865b54f0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" podUID="456e6c3a-c16c-470b-a0cd-bb79865b54f0" Feb 16 02:08:23.600617 master-0 kubenswrapper[7721]: I0216 02:08:23.600551 7721 scope.go:117] "RemoveContainer" containerID="1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071" Feb 16 02:08:23.623838 master-0 kubenswrapper[7721]: I0216 02:08:23.623789 7721 scope.go:117] "RemoveContainer" containerID="d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44" Feb 16 02:08:23.624461 master-0 kubenswrapper[7721]: E0216 02:08:23.624388 7721 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44\": container with ID starting with d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44 not found: ID does not exist" containerID="d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44" Feb 16 02:08:23.624540 master-0 kubenswrapper[7721]: I0216 02:08:23.624484 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44"} err="failed to get container status \"d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44\": rpc error: code = NotFound desc = could not find container \"d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44\": container with ID starting with d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44 not found: ID does not exist" Feb 16 02:08:23.624540 master-0 kubenswrapper[7721]: I0216 02:08:23.624522 7721 scope.go:117] "RemoveContainer" containerID="1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071" Feb 16 02:08:23.625137 master-0 kubenswrapper[7721]: E0216 02:08:23.625076 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071\": container with ID starting with 1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071 not found: ID does not exist" containerID="1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071" Feb 16 02:08:23.625203 master-0 kubenswrapper[7721]: I0216 02:08:23.625161 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071"} err="failed to get container status \"1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071\": rpc error: code = NotFound desc = 
could not find container \"1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071\": container with ID starting with 1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071 not found: ID does not exist" Feb 16 02:08:23.625203 master-0 kubenswrapper[7721]: I0216 02:08:23.625192 7721 scope.go:117] "RemoveContainer" containerID="561891ec1509f7c4965b19f5a07719f12421d6e230fb355e2417164216f94e4e" Feb 16 02:08:23.668602 master-0 kubenswrapper[7721]: I0216 02:08:23.668568 7721 scope.go:117] "RemoveContainer" containerID="84b61562b0c4e54147ae15c3e99cac0408baf94416f7643d3aafcf6087c2cdf4" Feb 16 02:08:24.588227 master-0 kubenswrapper[7721]: I0216 02:08:24.588130 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-dctqr_456e6c3a-c16c-470b-a0cd-bb79865b54f0/network-operator/1.log" Feb 16 02:08:24.593402 master-0 kubenswrapper[7721]: I0216 02:08:24.593352 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-v7lmz_e379cfaf-3a4c-40e7-8641-3524b3669295/openshift-apiserver-operator/1.log" Feb 16 02:08:24.612840 master-0 kubenswrapper[7721]: E0216 02:08:24.612762 7721 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/096d0b36f2002037317584bfc60221d5e8bccd7267512ed1e2e8ae5dbb9c6736/diff" to get inode usage: stat /var/lib/containers/storage/overlay/096d0b36f2002037317584bfc60221d5e8bccd7267512ed1e2e8ae5dbb9c6736/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-route-controller-manager_route-controller-manager-696cfb9f87-87b8w_f2a7e185-78f4-4d69-b126-d465374a6218/route-controller-manager/0.log" to get inode usage: stat /var/log/pods/openshift-route-controller-manager_route-controller-manager-696cfb9f87-87b8w_f2a7e185-78f4-4d69-b126-d465374a6218/route-controller-manager/0.log: no such file or directory Feb 16 
02:08:24.625086 master-0 kubenswrapper[7721]: E0216 02:08:24.625006 7721 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/724076cc410f0302af4696c11bd28a76bbf4474ef2b879f40c91bb1ccec1f80f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/724076cc410f0302af4696c11bd28a76bbf4474ef2b879f40c91bb1ccec1f80f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-controller-manager_controller-manager-5574f479df-xqnpg_7eda8a42-765e-47cf-896f-324e8185062e/controller-manager/0.log" to get inode usage: stat /var/log/pods/openshift-controller-manager_controller-manager-5574f479df-xqnpg_7eda8a42-765e-47cf-896f-324e8185062e/controller-manager/0.log: no such file or directory Feb 16 02:08:24.736214 master-0 kubenswrapper[7721]: I0216 02:08:24.736104 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" path="/var/lib/kubelet/pods/400a178a4d5e9a88ba5bbbd1da2ad15e/volumes" Feb 16 02:08:24.736857 master-0 kubenswrapper[7721]: I0216 02:08:24.736808 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 02:08:24.843750 master-0 kubenswrapper[7721]: I0216 02:08:24.843548 7721 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:27.279705 master-0 kubenswrapper[7721]: E0216 02:08:27.279343 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189497fe9bb4dd11 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:07:53.253256465 +0000 UTC m=+76.747490727,LastTimestamp:2026-02-16 02:07:53.253256465 +0000 UTC m=+76.747490727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:08:27.887680 master-0 kubenswrapper[7721]: I0216 02:08:27.887578 7721 patch_prober.go:28] interesting pod/authentication-operator-755d954778-bngv9 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Feb 16 02:08:27.888003 master-0 kubenswrapper[7721]: I0216 02:08:27.887686 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" podUID="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Feb 16 02:08:28.201887 master-0 kubenswrapper[7721]: E0216 02:08:28.201757 7721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:28.628882 master-0 kubenswrapper[7721]: E0216 02:08:28.628680 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:29.541370 master-0 kubenswrapper[7721]: I0216 02:08:29.541256 7721 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-htjgz container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 16 02:08:29.541652 master-0 kubenswrapper[7721]: I0216 02:08:29.541361 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" podUID="724ac845-3835-458b-9645-e665be135ff9" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 16 02:08:29.631026 master-0 kubenswrapper[7721]: I0216 02:08:29.630932 7721 generic.go:334] "Generic (PLEG): container finished" podID="4a5b01c1-1231-4e69-8b6c-c4981b65b26e" containerID="5fc216122a0910fe569f2678eb8d9427d5895c0ca4368e57e2f60b2f9f7164e2" exitCode=0 Feb 16 02:08:31.646940 master-0 kubenswrapper[7721]: I0216 02:08:31.646848 7721 generic.go:334] "Generic (PLEG): container finished" podID="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" containerID="ff658673f7455373622a5b5ee3d4af2de4dca2c4bf35a4e09ed477558c99902a" exitCode=0 Feb 16 02:08:33.178168 master-0 kubenswrapper[7721]: E0216 02:08:33.178114 7721 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5af6c1971100f17756e7e762c06f79f83611796bba0537d1a0ac340c54455d7f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5af6c1971100f17756e7e762c06f79f83611796bba0537d1a0ac340c54455d7f/diff: no such file or directory, extraDiskErr: could not stat 
"/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0615fd34-eaf9-4a3a-8543-25a7a5747194/installer/0.log" to get inode usage: stat /var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0615fd34-eaf9-4a3a-8543-25a7a5747194/installer/0.log: no such file or directory Feb 16 02:08:34.561305 master-0 kubenswrapper[7721]: E0216 02:08:34.561191 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 02:08:34.842796 master-0 kubenswrapper[7721]: I0216 02:08:34.842568 7721 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:35.679539 master-0 kubenswrapper[7721]: I0216 02:08:35.679485 7721 generic.go:334] "Generic (PLEG): container finished" podID="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" containerID="3c59868e46c60a0139fbb9feace033d7ff3288c7e5f3febf6586656bff57983b" exitCode=0 Feb 16 02:08:35.682801 master-0 kubenswrapper[7721]: I0216 02:08:35.682098 7721 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04" exitCode=0 Feb 16 02:08:36.714983 master-0 kubenswrapper[7721]: E0216 02:08:36.714899 7721 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/be02d296b21e94b9668a7a677c84759341f59cb1fa84a9476fba5687fa506302/diff" to get inode usage: stat /var/lib/containers/storage/overlay/be02d296b21e94b9668a7a677c84759341f59cb1fa84a9476fba5687fa506302/diff: no such file or directory, extraDiskErr: could 
not stat "/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log" to get inode usage: stat /var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log: no such file or directory Feb 16 02:08:36.758845 master-0 kubenswrapper[7721]: E0216 02:08:36.758757 7721 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/204a0b39eb8db35b884f6e11e2259e6ca9dc01524d1f0df1c655ff8e2f3a8cd3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/204a0b39eb8db35b884f6e11e2259e6ca9dc01524d1f0df1c655ff8e2f3a8cd3/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcd/0.log" to get inode usage: stat /var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcd/0.log: no such file or directory Feb 16 02:08:36.792357 master-0 kubenswrapper[7721]: E0216 02:08:36.792285 7721 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/68f1733e4847cb00b9491f9cab45a2882a3c8ab7b2b7631d0ad08535c9f52013/diff" to get inode usage: stat /var/lib/containers/storage/overlay/68f1733e4847cb00b9491f9cab45a2882a3c8ab7b2b7631d0ad08535c9f52013/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-dctqr_456e6c3a-c16c-470b-a0cd-bb79865b54f0/network-operator/0.log" to get inode usage: stat /var/log/pods/openshift-network-operator_network-operator-6fcf4c966-dctqr_456e6c3a-c16c-470b-a0cd-bb79865b54f0/network-operator/0.log: no such file or directory Feb 16 02:08:38.202996 master-0 kubenswrapper[7721]: E0216 02:08:38.202872 7721 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:38.629721 master-0 kubenswrapper[7721]: E0216 02:08:38.629510 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:39.712817 master-0 kubenswrapper[7721]: I0216 02:08:39.712717 7721 generic.go:334] "Generic (PLEG): container finished" podID="1743372f-bdb0-4558-b47b-3714f3aa3fde" containerID="1f742ab76573db69bc143df83fcf581f4c09f3de9ec005f01809b1af5690b4d3" exitCode=0 Feb 16 02:08:40.097075 master-0 kubenswrapper[7721]: E0216 02:08:40.096982 7721 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/eff2bfc278a1187dff0b455efac80129ebf0dbcbac8bb9fe36cbe4994c9bd840/diff" to get inode usage: stat /var/lib/containers/storage/overlay/eff2bfc278a1187dff0b455efac80129ebf0dbcbac8bb9fe36cbe4994c9bd840/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-v7lmz_e379cfaf-3a4c-40e7-8641-3524b3669295/openshift-apiserver-operator/0.log" to get inode usage: stat /var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-v7lmz_e379cfaf-3a4c-40e7-8641-3524b3669295/openshift-apiserver-operator/0.log: no such file or directory Feb 16 02:08:44.749610 master-0 kubenswrapper[7721]: I0216 02:08:44.749467 7721 generic.go:334] "Generic (PLEG): container finished" podID="91938be6-9ae4-4849-abe8-fc842daecd23" containerID="3af1f9b9834764b079edacabd51db4c771ce412df5b31f88b96200c070e64727" exitCode=0 Feb 16 02:08:47.774414 master-0 kubenswrapper[7721]: I0216 02:08:47.774286 
7721 generic.go:334] "Generic (PLEG): container finished" podID="724ac845-3835-458b-9645-e665be135ff9" containerID="7d8525382e7c303df250ff37074c2b59dae064f1c16fab17985b8492c29587df" exitCode=0 Feb 16 02:08:48.204293 master-0 kubenswrapper[7721]: E0216 02:08:48.204142 7721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:48.204293 master-0 kubenswrapper[7721]: I0216 02:08:48.204237 7721 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 02:08:48.630969 master-0 kubenswrapper[7721]: E0216 02:08:48.630759 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:08:48.630969 master-0 kubenswrapper[7721]: E0216 02:08:48.630813 7721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 02:08:48.690060 master-0 kubenswrapper[7721]: E0216 02:08:48.689959 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 02:08:53.277302 master-0 kubenswrapper[7721]: I0216 02:08:53.277180 7721 status_manager.go:851] "Failed to get status for pod" podUID="456e6c3a-c16c-470b-a0cd-bb79865b54f0" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-operator-6fcf4c966-dctqr)" Feb 16 02:08:53.808404 master-0 kubenswrapper[7721]: E0216 
02:08:53.808293 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:53.808404 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-595c8f9ff-n8xmg_openshift-cloud-credential-operator_30fef0d5-46ea-4fa3-9ffa-88187d010ffe_0(4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-595c8f9ff-n8xmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e" Netns:"/var/run/netns/87b73545-2852-46c5-8f4a-5b57f47e0916" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-595c8f9ff-n8xmg;K8S_POD_INFRA_CONTAINER_ID=4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e;K8S_POD_UID=30fef0d5-46ea-4fa3-9ffa-88187d010ffe" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg/30fef0d5-46ea-4fa3-9ffa-88187d010ffe]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-595c8f9ff-n8xmg in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-595c8f9ff-n8xmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-595c8f9ff-n8xmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.808404 master-0 kubenswrapper[7721]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.808404 master-0 kubenswrapper[7721]: > Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: E0216 02:08:53.808413 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-595c8f9ff-n8xmg_openshift-cloud-credential-operator_30fef0d5-46ea-4fa3-9ffa-88187d010ffe_0(4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-595c8f9ff-n8xmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e" Netns:"/var/run/netns/87b73545-2852-46c5-8f4a-5b57f47e0916" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-595c8f9ff-n8xmg;K8S_POD_INFRA_CONTAINER_ID=4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e;K8S_POD_UID=30fef0d5-46ea-4fa3-9ffa-88187d010ffe" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg/30fef0d5-46ea-4fa3-9ffa-88187d010ffe]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-595c8f9ff-n8xmg in out of cluster comm: SetNetworkStatus: failed to 
update the pod cloud-credential-operator-595c8f9ff-n8xmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-595c8f9ff-n8xmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: E0216 02:08:53.808498 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-595c8f9ff-n8xmg_openshift-cloud-credential-operator_30fef0d5-46ea-4fa3-9ffa-88187d010ffe_0(4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-595c8f9ff-n8xmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e" Netns:"/var/run/netns/87b73545-2852-46c5-8f4a-5b57f47e0916" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-595c8f9ff-n8xmg;K8S_POD_INFRA_CONTAINER_ID=4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e;K8S_POD_UID=30fef0d5-46ea-4fa3-9ffa-88187d010ffe" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg/30fef0d5-46ea-4fa3-9ffa-88187d010ffe]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-595c8f9ff-n8xmg in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-595c8f9ff-n8xmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-595c8f9ff-n8xmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:08:53.809022 master-0 kubenswrapper[7721]: E0216 02:08:53.808595 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cloud-credential-operator-595c8f9ff-n8xmg_openshift-cloud-credential-operator(30fef0d5-46ea-4fa3-9ffa-88187d010ffe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cloud-credential-operator-595c8f9ff-n8xmg_openshift-cloud-credential-operator(30fef0d5-46ea-4fa3-9ffa-88187d010ffe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-595c8f9ff-n8xmg_openshift-cloud-credential-operator_30fef0d5-46ea-4fa3-9ffa-88187d010ffe_0(4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-595c8f9ff-n8xmg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e\\\" Netns:\\\"/var/run/netns/87b73545-2852-46c5-8f4a-5b57f47e0916\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-595c8f9ff-n8xmg;K8S_POD_INFRA_CONTAINER_ID=4c1c0a82ebd9bb30f2ddd0f339019e495af2b800eb6f634fcef62aa303c73e6e;K8S_POD_UID=30fef0d5-46ea-4fa3-9ffa-88187d010ffe\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg/30fef0d5-46ea-4fa3-9ffa-88187d010ffe]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-595c8f9ff-n8xmg in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-595c8f9ff-n8xmg in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-595c8f9ff-n8xmg?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" podUID="30fef0d5-46ea-4fa3-9ffa-88187d010ffe" Feb 16 02:08:53.850358 master-0 kubenswrapper[7721]: I0216 02:08:53.848609 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kffmg_dbc5b101-936f-4bf3-bbf3-f30966b0ab50/approver/0.log" Feb 16 02:08:53.850358 master-0 kubenswrapper[7721]: I0216 02:08:53.849899 7721 generic.go:334] "Generic (PLEG): container finished" podID="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" containerID="73c5d4096ed4f5f723bea74695c09c9920b7cf6836ef92fa2286119a88696c78" exitCode=1 Feb 16 02:08:53.850358 master-0 kubenswrapper[7721]: I0216 02:08:53.850008 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:08:53.851719 master-0 kubenswrapper[7721]: I0216 02:08:53.851684 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: E0216 02:08:53.856606 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_1f35c7c9-16ec-486e-99ff-f1cbcce76eb3_0(669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3" Netns:"/var/run/netns/9cbafa4c-a8f9-4d92-ba38-eb144fbc45cc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3;K8S_POD_UID=1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: > Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: E0216 02:08:53.856673 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_1f35c7c9-16ec-486e-99ff-f1cbcce76eb3_0(669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3" Netns:"/var/run/netns/9cbafa4c-a8f9-4d92-ba38-eb144fbc45cc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3;K8S_POD_UID=1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: E0216 02:08:53.856697 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_1f35c7c9-16ec-486e-99ff-f1cbcce76eb3_0(669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3" Netns:"/var/run/netns/9cbafa4c-a8f9-4d92-ba38-eb144fbc45cc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3;K8S_POD_UID=1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: 
[openshift-kube-controller-manager/installer-2-master-0/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: > pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 02:08:53.856851 master-0 kubenswrapper[7721]: E0216 02:08:53.856772 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(1f35c7c9-16ec-486e-99ff-f1cbcce76eb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(1f35c7c9-16ec-486e-99ff-f1cbcce76eb3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_1f35c7c9-16ec-486e-99ff-f1cbcce76eb3_0(669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:\\\"669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3\\\" Netns:\\\"/var/run/netns/9cbafa4c-a8f9-4d92-ba38-eb144fbc45cc\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=669ed33a69ff47843ed79d4b81518eadb52fd57111481860b0dc8824f552b0f3;K8S_POD_UID=1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" Feb 16 02:08:53.865154 master-0 kubenswrapper[7721]: E0216 02:08:53.865070 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:53.865154 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create 
pod network sandbox k8s_cluster-baremetal-operator-7bc947fc7d-frvgm_openshift-machine-api_27a42eb0-677c-414d-b0ec-f945ec39b7e9_0(cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d): error adding pod openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d" Netns:"/var/run/netns/6f24556d-5c25-4835-9b5f-79b0e2987443" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-7bc947fc7d-frvgm;K8S_POD_INFRA_CONTAINER_ID=cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d;K8S_POD_UID=27a42eb0-677c-414d-b0ec-f945ec39b7e9" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm/27a42eb0-677c-414d-b0ec-f945ec39b7e9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-7bc947fc7d-frvgm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-7bc947fc7d-frvgm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-7bc947fc7d-frvgm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.865154 master-0 kubenswrapper[7721]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.865154 master-0 kubenswrapper[7721]: > Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: E0216 02:08:53.865164 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-7bc947fc7d-frvgm_openshift-machine-api_27a42eb0-677c-414d-b0ec-f945ec39b7e9_0(cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d): error adding pod openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d" Netns:"/var/run/netns/6f24556d-5c25-4835-9b5f-79b0e2987443" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-7bc947fc7d-frvgm;K8S_POD_INFRA_CONTAINER_ID=cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d;K8S_POD_UID=27a42eb0-677c-414d-b0ec-f945ec39b7e9" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm/27a42eb0-677c-414d-b0ec-f945ec39b7e9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-7bc947fc7d-frvgm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-7bc947fc7d-frvgm 
in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-7bc947fc7d-frvgm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: > pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: E0216 02:08:53.865194 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-7bc947fc7d-frvgm_openshift-machine-api_27a42eb0-677c-414d-b0ec-f945ec39b7e9_0(cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d): error adding pod openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d" Netns:"/var/run/netns/6f24556d-5c25-4835-9b5f-79b0e2987443" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-7bc947fc7d-frvgm;K8S_POD_INFRA_CONTAINER_ID=cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d;K8S_POD_UID=27a42eb0-677c-414d-b0ec-f945ec39b7e9" Path:"" ERRORED: error configuring pod 
[openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm/27a42eb0-677c-414d-b0ec-f945ec39b7e9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-7bc947fc7d-frvgm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-7bc947fc7d-frvgm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-7bc947fc7d-frvgm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: > pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:08:53.865479 master-0 kubenswrapper[7721]: E0216 02:08:53.865277 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-baremetal-operator-7bc947fc7d-frvgm_openshift-machine-api(27a42eb0-677c-414d-b0ec-f945ec39b7e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-baremetal-operator-7bc947fc7d-frvgm_openshift-machine-api(27a42eb0-677c-414d-b0ec-f945ec39b7e9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-7bc947fc7d-frvgm_openshift-machine-api_27a42eb0-677c-414d-b0ec-f945ec39b7e9_0(cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d): error adding pod 
openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d\\\" Netns:\\\"/var/run/netns/6f24556d-5c25-4835-9b5f-79b0e2987443\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-7bc947fc7d-frvgm;K8S_POD_INFRA_CONTAINER_ID=cadc5441e05b9fa1450de35e6ae49b1211df76390ff17cc1846481dc8196699d;K8S_POD_UID=27a42eb0-677c-414d-b0ec-f945ec39b7e9\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm/27a42eb0-677c-414d-b0ec-f945ec39b7e9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-7bc947fc7d-frvgm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-7bc947fc7d-frvgm in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-7bc947fc7d-frvgm?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" 
pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" podUID="27a42eb0-677c-414d-b0ec-f945ec39b7e9" Feb 16 02:08:53.871339 master-0 kubenswrapper[7721]: E0216 02:08:53.871264 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:53.871339 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-75b869db96-qm7rm_openshift-cluster-storage-operator_a77e2f8f-d164-4a58-aab2-f3444c05cacb_0(1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-qm7rm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a" Netns:"/var/run/netns/2361ae1d-1ba2-47f0-be7c-1dacead0f37f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-75b869db96-qm7rm;K8S_POD_INFRA_CONTAINER_ID=1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a;K8S_POD_UID=a77e2f8f-d164-4a58-aab2-f3444c05cacb" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm/a77e2f8f-d164-4a58-aab2-f3444c05cacb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-75b869db96-qm7rm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-75b869db96-qm7rm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-75b869db96-qm7rm?timeout=1m0s": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers) Feb 16 02:08:53.871339 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.871339 master-0 kubenswrapper[7721]: > Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: E0216 02:08:53.871355 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-75b869db96-qm7rm_openshift-cluster-storage-operator_a77e2f8f-d164-4a58-aab2-f3444c05cacb_0(1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-qm7rm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a" Netns:"/var/run/netns/2361ae1d-1ba2-47f0-be7c-1dacead0f37f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-75b869db96-qm7rm;K8S_POD_INFRA_CONTAINER_ID=1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a;K8S_POD_UID=a77e2f8f-d164-4a58-aab2-f3444c05cacb" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm/a77e2f8f-d164-4a58-aab2-f3444c05cacb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
cluster-storage-operator-75b869db96-qm7rm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-75b869db96-qm7rm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-75b869db96-qm7rm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: E0216 02:08:53.871381 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-75b869db96-qm7rm_openshift-cluster-storage-operator_a77e2f8f-d164-4a58-aab2-f3444c05cacb_0(1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-qm7rm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a" Netns:"/var/run/netns/2361ae1d-1ba2-47f0-be7c-1dacead0f37f" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-75b869db96-qm7rm;K8S_POD_INFRA_CONTAINER_ID=1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a;K8S_POD_UID=a77e2f8f-d164-4a58-aab2-f3444c05cacb" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm/a77e2f8f-d164-4a58-aab2-f3444c05cacb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-75b869db96-qm7rm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-75b869db96-qm7rm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-75b869db96-qm7rm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:08:53.871635 master-0 kubenswrapper[7721]: E0216 02:08:53.871484 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-storage-operator-75b869db96-qm7rm_openshift-cluster-storage-operator(a77e2f8f-d164-4a58-aab2-f3444c05cacb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cluster-storage-operator-75b869db96-qm7rm_openshift-cluster-storage-operator(a77e2f8f-d164-4a58-aab2-f3444c05cacb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-75b869db96-qm7rm_openshift-cluster-storage-operator_a77e2f8f-d164-4a58-aab2-f3444c05cacb_0(1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-qm7rm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a\\\" Netns:\\\"/var/run/netns/2361ae1d-1ba2-47f0-be7c-1dacead0f37f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-75b869db96-qm7rm;K8S_POD_INFRA_CONTAINER_ID=1dee064bb5bd1b7527527c7c8f3d5597940850a81a81c2dc80a67ec89f33bc4a;K8S_POD_UID=a77e2f8f-d164-4a58-aab2-f3444c05cacb\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm/a77e2f8f-d164-4a58-aab2-f3444c05cacb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-75b869db96-qm7rm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-75b869db96-qm7rm in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-75b869db96-qm7rm?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" podUID="a77e2f8f-d164-4a58-aab2-f3444c05cacb" Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: E0216 02:08:54.053918 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-config-operator-84976bb859-5gs6g_openshift-machine-config-operator_d870332c-2498-4135-a9b3-a71e67c2805b_0(a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862): error adding pod openshift-machine-config-operator_machine-config-operator-84976bb859-5gs6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862" Netns:"/var/run/netns/30447401-ef28-47e7-9e80-d54dcdd258b4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-84976bb859-5gs6g;K8S_POD_INFRA_CONTAINER_ID=a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862;K8S_POD_UID=d870332c-2498-4135-a9b3-a71e67c2805b" Path:"" ERRORED: error configuring pod [openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g] networking: Multus: [openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g/d870332c-2498-4135-a9b3-a71e67c2805b]: error setting the 
networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-config-operator-84976bb859-5gs6g in out of cluster comm: SetNetworkStatus: failed to update the pod machine-config-operator-84976bb859-5gs6g in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84976bb859-5gs6g?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: > Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: E0216 02:08:54.053994 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-config-operator-84976bb859-5gs6g_openshift-machine-config-operator_d870332c-2498-4135-a9b3-a71e67c2805b_0(a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862): error adding pod openshift-machine-config-operator_machine-config-operator-84976bb859-5gs6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862" Netns:"/var/run/netns/30447401-ef28-47e7-9e80-d54dcdd258b4" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-84976bb859-5gs6g;K8S_POD_INFRA_CONTAINER_ID=a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862;K8S_POD_UID=d870332c-2498-4135-a9b3-a71e67c2805b" Path:"" ERRORED: error configuring pod [openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g] networking: Multus: [openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g/d870332c-2498-4135-a9b3-a71e67c2805b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-config-operator-84976bb859-5gs6g in out of cluster comm: SetNetworkStatus: failed to update the pod machine-config-operator-84976bb859-5gs6g in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84976bb859-5gs6g?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: > pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: E0216 02:08:54.054029 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_machine-config-operator-84976bb859-5gs6g_openshift-machine-config-operator_d870332c-2498-4135-a9b3-a71e67c2805b_0(a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862): error adding pod openshift-machine-config-operator_machine-config-operator-84976bb859-5gs6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862" Netns:"/var/run/netns/30447401-ef28-47e7-9e80-d54dcdd258b4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-84976bb859-5gs6g;K8S_POD_INFRA_CONTAINER_ID=a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862;K8S_POD_UID=d870332c-2498-4135-a9b3-a71e67c2805b" Path:"" ERRORED: error configuring pod [openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g] networking: Multus: [openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g/d870332c-2498-4135-a9b3-a71e67c2805b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-config-operator-84976bb859-5gs6g in out of cluster comm: SetNetworkStatus: failed to update the pod machine-config-operator-84976bb859-5gs6g in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84976bb859-5gs6g?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: > pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:08:54.055126 master-0 kubenswrapper[7721]: E0216 02:08:54.054103 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"machine-config-operator-84976bb859-5gs6g_openshift-machine-config-operator(d870332c-2498-4135-a9b3-a71e67c2805b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"machine-config-operator-84976bb859-5gs6g_openshift-machine-config-operator(d870332c-2498-4135-a9b3-a71e67c2805b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-config-operator-84976bb859-5gs6g_openshift-machine-config-operator_d870332c-2498-4135-a9b3-a71e67c2805b_0(a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862): error adding pod openshift-machine-config-operator_machine-config-operator-84976bb859-5gs6g to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862\\\" Netns:\\\"/var/run/netns/30447401-ef28-47e7-9e80-d54dcdd258b4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-84976bb859-5gs6g;K8S_POD_INFRA_CONTAINER_ID=a72b5c78c9fcfa97e9e7b48a7b7d132cc90d036ec7cf584720d35a155b84d862;K8S_POD_UID=d870332c-2498-4135-a9b3-a71e67c2805b\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g] networking: Multus: [openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g/d870332c-2498-4135-a9b3-a71e67c2805b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-config-operator-84976bb859-5gs6g in out of cluster comm: SetNetworkStatus: failed to update the pod machine-config-operator-84976bb859-5gs6g in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84976bb859-5gs6g?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" podUID="d870332c-2498-4135-a9b3-a71e67c2805b" Feb 16 02:08:54.066869 master-0 kubenswrapper[7721]: E0216 02:08:54.066782 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:54.066869 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-87777c9b7-fxzh6_openshift-operator-lifecycle-manager_dc3354cb-b6c3-40a5-a695-cccb079ad292_0(da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042): error adding pod openshift-operator-lifecycle-manager_packageserver-87777c9b7-fxzh6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042" Netns:"/var/run/netns/815f194d-aa81-40db-8fad-e5e827139425" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-87777c9b7-fxzh6;K8S_POD_INFRA_CONTAINER_ID=da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042;K8S_POD_UID=dc3354cb-b6c3-40a5-a695-cccb079ad292" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6/dc3354cb-b6c3-40a5-a695-cccb079ad292]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-87777c9b7-fxzh6 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-87777c9b7-fxzh6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-87777c9b7-fxzh6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.066869 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.066869 master-0 kubenswrapper[7721]: > Feb 16 02:08:54.067158 master-0 kubenswrapper[7721]: E0216 02:08:54.066886 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:54.067158 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_packageserver-87777c9b7-fxzh6_openshift-operator-lifecycle-manager_dc3354cb-b6c3-40a5-a695-cccb079ad292_0(da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042): error adding pod openshift-operator-lifecycle-manager_packageserver-87777c9b7-fxzh6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042" Netns:"/var/run/netns/815f194d-aa81-40db-8fad-e5e827139425" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-87777c9b7-fxzh6;K8S_POD_INFRA_CONTAINER_ID=da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042;K8S_POD_UID=dc3354cb-b6c3-40a5-a695-cccb079ad292" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6/dc3354cb-b6c3-40a5-a695-cccb079ad292]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-87777c9b7-fxzh6 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-87777c9b7-fxzh6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-87777c9b7-fxzh6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.067158 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} 
Feb 16 02:08:54.067158 master-0 kubenswrapper[7721]: > pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:08:54.067158 master-0 kubenswrapper[7721]: E0216 02:08:54.066916 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 02:08:54.067158 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-87777c9b7-fxzh6_openshift-operator-lifecycle-manager_dc3354cb-b6c3-40a5-a695-cccb079ad292_0(da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042): error adding pod openshift-operator-lifecycle-manager_packageserver-87777c9b7-fxzh6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042" Netns:"/var/run/netns/815f194d-aa81-40db-8fad-e5e827139425" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-87777c9b7-fxzh6;K8S_POD_INFRA_CONTAINER_ID=da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042;K8S_POD_UID=dc3354cb-b6c3-40a5-a695-cccb079ad292" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6/dc3354cb-b6c3-40a5-a695-cccb079ad292]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-87777c9b7-fxzh6 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-87777c9b7-fxzh6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-87777c9b7-fxzh6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.067158 master-0 
kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.067158 master-0 kubenswrapper[7721]: > pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:08:54.067158 master-0 kubenswrapper[7721]: E0216 02:08:54.067020 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"packageserver-87777c9b7-fxzh6_openshift-operator-lifecycle-manager(dc3354cb-b6c3-40a5-a695-cccb079ad292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"packageserver-87777c9b7-fxzh6_openshift-operator-lifecycle-manager(dc3354cb-b6c3-40a5-a695-cccb079ad292)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-87777c9b7-fxzh6_openshift-operator-lifecycle-manager_dc3354cb-b6c3-40a5-a695-cccb079ad292_0(da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042): error adding pod openshift-operator-lifecycle-manager_packageserver-87777c9b7-fxzh6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042\\\" Netns:\\\"/var/run/netns/815f194d-aa81-40db-8fad-e5e827139425\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-87777c9b7-fxzh6;K8S_POD_INFRA_CONTAINER_ID=da95de67fb0edb5fb8df64ae73b9f1edf43efdd11e844bf8acfb0e130cf27042;K8S_POD_UID=dc3354cb-b6c3-40a5-a695-cccb079ad292\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6/dc3354cb-b6c3-40a5-a695-cccb079ad292]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-87777c9b7-fxzh6 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-87777c9b7-fxzh6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-87777c9b7-fxzh6?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podUID="dc3354cb-b6c3-40a5-a695-cccb079ad292" Feb 16 02:08:54.235366 master-0 kubenswrapper[7721]: E0216 02:08:54.235145 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:54.235366 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-qw2zq_openshift-machine-api_fec84b8a-a0d1-4b07-8827-cef0beb89ecd_0(ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-qw2zq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82" Netns:"/var/run/netns/df6e20e2-ea38-4975-aa7c-a486d9ab5d3c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-qw2zq;K8S_POD_INFRA_CONTAINER_ID=ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82;K8S_POD_UID=fec84b8a-a0d1-4b07-8827-cef0beb89ecd" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq/fec84b8a-a0d1-4b07-8827-cef0beb89ecd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-qw2zq in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-qw2zq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-qw2zq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.235366 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.235366 master-0 kubenswrapper[7721]: > Feb 16 02:08:54.235615 master-0 kubenswrapper[7721]: E0216 02:08:54.235490 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:54.235615 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_machine-api-operator-bd7dd5c46-qw2zq_openshift-machine-api_fec84b8a-a0d1-4b07-8827-cef0beb89ecd_0(ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-qw2zq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82" Netns:"/var/run/netns/df6e20e2-ea38-4975-aa7c-a486d9ab5d3c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-qw2zq;K8S_POD_INFRA_CONTAINER_ID=ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82;K8S_POD_UID=fec84b8a-a0d1-4b07-8827-cef0beb89ecd" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq/fec84b8a-a0d1-4b07-8827-cef0beb89ecd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-qw2zq in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-qw2zq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-qw2zq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.235615 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.235615 master-0 
kubenswrapper[7721]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:08:54.235615 master-0 kubenswrapper[7721]: E0216 02:08:54.235575 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 02:08:54.235615 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-qw2zq_openshift-machine-api_fec84b8a-a0d1-4b07-8827-cef0beb89ecd_0(ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-qw2zq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82" Netns:"/var/run/netns/df6e20e2-ea38-4975-aa7c-a486d9ab5d3c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-qw2zq;K8S_POD_INFRA_CONTAINER_ID=ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82;K8S_POD_UID=fec84b8a-a0d1-4b07-8827-cef0beb89ecd" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq/fec84b8a-a0d1-4b07-8827-cef0beb89ecd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-qw2zq in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-qw2zq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-qw2zq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.235615 master-0 kubenswrapper[7721]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.235615 master-0 kubenswrapper[7721]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:08:54.235898 master-0 kubenswrapper[7721]: E0216 02:08:54.235701 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"machine-api-operator-bd7dd5c46-qw2zq_openshift-machine-api(fec84b8a-a0d1-4b07-8827-cef0beb89ecd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"machine-api-operator-bd7dd5c46-qw2zq_openshift-machine-api(fec84b8a-a0d1-4b07-8827-cef0beb89ecd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-qw2zq_openshift-machine-api_fec84b8a-a0d1-4b07-8827-cef0beb89ecd_0(ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-qw2zq to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82\\\" Netns:\\\"/var/run/netns/df6e20e2-ea38-4975-aa7c-a486d9ab5d3c\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-qw2zq;K8S_POD_INFRA_CONTAINER_ID=ddf392bda525c14a44feb60e45a91b90aee0178b6c82e7ca55201afcbe8c9a82;K8S_POD_UID=fec84b8a-a0d1-4b07-8827-cef0beb89ecd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq] networking: Multus: 
[openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq/fec84b8a-a0d1-4b07-8827-cef0beb89ecd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-qw2zq in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-qw2zq in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-qw2zq?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" podUID="fec84b8a-a0d1-4b07-8827-cef0beb89ecd" Feb 16 02:08:54.238631 master-0 kubenswrapper[7721]: E0216 02:08:54.238573 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:54.238631 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-cb4f7b4cf-llpf5_openshift-insights_0abea413-e08a-465a-8ec4-2be650bfd5bd_0(bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41): error adding pod openshift-insights_insights-operator-cb4f7b4cf-llpf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41" 
Netns:"/var/run/netns/ab93e9ad-cb28-49f4-9adc-b0719026e287" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-cb4f7b4cf-llpf5;K8S_POD_INFRA_CONTAINER_ID=bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41;K8S_POD_UID=0abea413-e08a-465a-8ec4-2be650bfd5bd" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-cb4f7b4cf-llpf5] networking: Multus: [openshift-insights/insights-operator-cb4f7b4cf-llpf5/0abea413-e08a-465a-8ec4-2be650bfd5bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-cb4f7b4cf-llpf5 in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-cb4f7b4cf-llpf5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-cb4f7b4cf-llpf5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.238631 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.238631 master-0 kubenswrapper[7721]: > Feb 16 02:08:54.238804 master-0 kubenswrapper[7721]: E0216 02:08:54.238651 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:54.238804 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-cb4f7b4cf-llpf5_openshift-insights_0abea413-e08a-465a-8ec4-2be650bfd5bd_0(bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41): error adding pod 
openshift-insights_insights-operator-cb4f7b4cf-llpf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41" Netns:"/var/run/netns/ab93e9ad-cb28-49f4-9adc-b0719026e287" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-cb4f7b4cf-llpf5;K8S_POD_INFRA_CONTAINER_ID=bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41;K8S_POD_UID=0abea413-e08a-465a-8ec4-2be650bfd5bd" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-cb4f7b4cf-llpf5] networking: Multus: [openshift-insights/insights-operator-cb4f7b4cf-llpf5/0abea413-e08a-465a-8ec4-2be650bfd5bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-cb4f7b4cf-llpf5 in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-cb4f7b4cf-llpf5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-cb4f7b4cf-llpf5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.238804 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.238804 master-0 kubenswrapper[7721]: > pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:08:54.238804 master-0 kubenswrapper[7721]: E0216 02:08:54.238686 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" 
err=< Feb 16 02:08:54.238804 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-cb4f7b4cf-llpf5_openshift-insights_0abea413-e08a-465a-8ec4-2be650bfd5bd_0(bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41): error adding pod openshift-insights_insights-operator-cb4f7b4cf-llpf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41" Netns:"/var/run/netns/ab93e9ad-cb28-49f4-9adc-b0719026e287" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-cb4f7b4cf-llpf5;K8S_POD_INFRA_CONTAINER_ID=bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41;K8S_POD_UID=0abea413-e08a-465a-8ec4-2be650bfd5bd" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-cb4f7b4cf-llpf5] networking: Multus: [openshift-insights/insights-operator-cb4f7b4cf-llpf5/0abea413-e08a-465a-8ec4-2be650bfd5bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-cb4f7b4cf-llpf5 in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-cb4f7b4cf-llpf5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-cb4f7b4cf-llpf5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.238804 master-0 kubenswrapper[7721]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.238804 master-0 kubenswrapper[7721]: > pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:08:54.239172 master-0 kubenswrapper[7721]: E0216 02:08:54.239034 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-cb4f7b4cf-llpf5_openshift-insights(0abea413-e08a-465a-8ec4-2be650bfd5bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-cb4f7b4cf-llpf5_openshift-insights(0abea413-e08a-465a-8ec4-2be650bfd5bd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-cb4f7b4cf-llpf5_openshift-insights_0abea413-e08a-465a-8ec4-2be650bfd5bd_0(bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41): error adding pod openshift-insights_insights-operator-cb4f7b4cf-llpf5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41\\\" Netns:\\\"/var/run/netns/ab93e9ad-cb28-49f4-9adc-b0719026e287\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-cb4f7b4cf-llpf5;K8S_POD_INFRA_CONTAINER_ID=bb1a535dee1282a97a3ffd2d02571d7824aeccc3e07cae01b324766d38cf3e41;K8S_POD_UID=0abea413-e08a-465a-8ec4-2be650bfd5bd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-cb4f7b4cf-llpf5] networking: Multus: 
[openshift-insights/insights-operator-cb4f7b4cf-llpf5/0abea413-e08a-465a-8ec4-2be650bfd5bd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-cb4f7b4cf-llpf5 in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-cb4f7b4cf-llpf5 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-cb4f7b4cf-llpf5?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" podUID="0abea413-e08a-465a-8ec4-2be650bfd5bd" Feb 16 02:08:54.256983 master-0 kubenswrapper[7721]: E0216 02:08:54.256916 7721 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:08:54.256983 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-67fd9768b5-9rvcj_openshift-machine-api_48863ff6-63ac-42d7-bac7-29d888c92db9_0(78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074): error adding pod openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-9rvcj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074" 
Netns:"/var/run/netns/c28cf4bd-3a2a-42ba-9605-e2eca9af488c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-67fd9768b5-9rvcj;K8S_POD_INFRA_CONTAINER_ID=78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074;K8S_POD_UID=48863ff6-63ac-42d7-bac7-29d888c92db9" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj/48863ff6-63ac-42d7-bac7-29d888c92db9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-67fd9768b5-9rvcj in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-67fd9768b5-9rvcj in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-9rvcj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.256983 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.256983 master-0 kubenswrapper[7721]: > Feb 16 02:08:54.257160 master-0 kubenswrapper[7721]: E0216 02:08:54.257036 7721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 02:08:54.257160 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-autoscaler-operator-67fd9768b5-9rvcj_openshift-machine-api_48863ff6-63ac-42d7-bac7-29d888c92db9_0(78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074): error adding pod openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-9rvcj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074" Netns:"/var/run/netns/c28cf4bd-3a2a-42ba-9605-e2eca9af488c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-67fd9768b5-9rvcj;K8S_POD_INFRA_CONTAINER_ID=78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074;K8S_POD_UID=48863ff6-63ac-42d7-bac7-29d888c92db9" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj/48863ff6-63ac-42d7-bac7-29d888c92db9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-67fd9768b5-9rvcj in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-67fd9768b5-9rvcj in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-9rvcj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.257160 master-0 kubenswrapper[7721]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.257160 master-0 kubenswrapper[7721]: > pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:08:54.257160 master-0 kubenswrapper[7721]: E0216 02:08:54.257081 7721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 02:08:54.257160 master-0 kubenswrapper[7721]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-67fd9768b5-9rvcj_openshift-machine-api_48863ff6-63ac-42d7-bac7-29d888c92db9_0(78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074): error adding pod openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-9rvcj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074" Netns:"/var/run/netns/c28cf4bd-3a2a-42ba-9605-e2eca9af488c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-67fd9768b5-9rvcj;K8S_POD_INFRA_CONTAINER_ID=78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074;K8S_POD_UID=48863ff6-63ac-42d7-bac7-29d888c92db9" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj/48863ff6-63ac-42d7-bac7-29d888c92db9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-67fd9768b5-9rvcj in out of cluster comm: 
SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-67fd9768b5-9rvcj in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-9rvcj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 02:08:54.257160 master-0 kubenswrapper[7721]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 02:08:54.257160 master-0 kubenswrapper[7721]: > pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:08:54.257449 master-0 kubenswrapper[7721]: E0216 02:08:54.257231 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-autoscaler-operator-67fd9768b5-9rvcj_openshift-machine-api(48863ff6-63ac-42d7-bac7-29d888c92db9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-autoscaler-operator-67fd9768b5-9rvcj_openshift-machine-api(48863ff6-63ac-42d7-bac7-29d888c92db9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-67fd9768b5-9rvcj_openshift-machine-api_48863ff6-63ac-42d7-bac7-29d888c92db9_0(78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074): error adding pod openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-9rvcj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074\\\" 
Netns:\\\"/var/run/netns/c28cf4bd-3a2a-42ba-9605-e2eca9af488c\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-67fd9768b5-9rvcj;K8S_POD_INFRA_CONTAINER_ID=78c1aee49bc52fc905641630bdd5583d28d1d97478719e45721e361b9267d074;K8S_POD_UID=48863ff6-63ac-42d7-bac7-29d888c92db9\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj/48863ff6-63ac-42d7-bac7-29d888c92db9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-67fd9768b5-9rvcj in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-67fd9768b5-9rvcj in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-9rvcj?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" podUID="48863ff6-63ac-42d7-bac7-29d888c92db9" Feb 16 02:08:54.857202 master-0 kubenswrapper[7721]: I0216 02:08:54.857096 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:08:54.857202 master-0 kubenswrapper[7721]: I0216 02:08:54.857172 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5"
Feb 16 02:08:54.858333 master-0 kubenswrapper[7721]: I0216 02:08:54.857223 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm"
Feb 16 02:08:54.858333 master-0 kubenswrapper[7721]: I0216 02:08:54.857340 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj"
Feb 16 02:08:54.858333 master-0 kubenswrapper[7721]: I0216 02:08:54.857463 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"
Feb 16 02:08:54.858333 master-0 kubenswrapper[7721]: I0216 02:08:54.857636 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6"
Feb 16 02:08:54.858333 master-0 kubenswrapper[7721]: I0216 02:08:54.857698 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g"
Feb 16 02:08:54.858333 master-0 kubenswrapper[7721]: I0216 02:08:54.857745 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"
Feb 16 02:08:54.858333 master-0 kubenswrapper[7721]: I0216 02:08:54.858006 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 02:08:54.858333 master-0 kubenswrapper[7721]: I0216 02:08:54.858333 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5"
Feb 16 02:08:54.859034 master-0 kubenswrapper[7721]: I0216 02:08:54.858647 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6"
Feb 16 02:08:54.859034 master-0 kubenswrapper[7721]: I0216 02:08:54.858704 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj"
Feb 16 02:08:54.859034 master-0 kubenswrapper[7721]: I0216 02:08:54.858877 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g"
Feb 16 02:08:54.860307 master-0 kubenswrapper[7721]: I0216 02:08:54.859111 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm"
Feb 16 02:08:54.860307 master-0 kubenswrapper[7721]: I0216 02:08:54.859706 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"
Feb 16 02:08:54.860307 master-0 kubenswrapper[7721]: I0216 02:08:54.860143 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"
Feb 16 02:08:58.205194 master-0 kubenswrapper[7721]: E0216 02:08:58.205023 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Feb 16 02:08:58.740729 master-0 kubenswrapper[7721]: E0216 02:08:58.740618 7721 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 02:08:58.741045 master-0 kubenswrapper[7721]: E0216 02:08:58.740944 7721 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s"
Feb 16 02:08:58.753757 master-0 kubenswrapper[7721]: I0216 02:08:58.753677 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 16 02:09:01.283567 master-0 kubenswrapper[7721]: E0216 02:09:01.283309 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-samples-operator-f8cbff74c-k8jz5.189497fea9ea2474 openshift-cluster-samples-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-samples-operator,Name:cluster-samples-operator-f8cbff74c-k8jz5,UID:5b62004d-7fe3-47ae-8e26-8496befb047c,APIVersion:v1,ResourceVersion:8804,FieldPath:spec.containers{cluster-samples-operator},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:07:53.491629172 +0000 UTC m=+76.985863434,LastTimestamp:2026-02-16 02:07:53.491629172 +0000 UTC m=+76.985863434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:09:01.912565 master-0 kubenswrapper[7721]: I0216 02:09:01.912245 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8ea4c28c-8f53-4b41-9c85-c8c50599d7cd/installer/0.log"
Feb 16 02:09:01.912565 master-0 kubenswrapper[7721]: I0216 02:09:01.912373 7721 generic.go:334] "Generic (PLEG): container finished" podID="8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" containerID="bb9ffb6ca918ba3341c8df0e7c8c8ba7325d86a68eb6b2856270c9f7326551b5" exitCode=1
Feb 16 02:09:02.344552 master-0 kubenswrapper[7721]: E0216 02:09:02.344427 7721 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8faeb53b68a176c805372cd5599e2828bcc41d833304a4005952056810219343/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8faeb53b68a176c805372cd5599e2828bcc41d833304a4005952056810219343/diff: no such file or directory, extraDiskErr:
Feb 16 02:09:03.930507 master-0 kubenswrapper[7721]: I0216 02:09:03.930374 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/0.log"
Feb 16 02:09:03.930507 master-0 kubenswrapper[7721]: I0216 02:09:03.930490 7721 generic.go:334] "Generic (PLEG): container finished" podID="04804a08-e3a5-46f3-abcb-967866834baa" containerID="87993edba6f07930300de55e54a0440afea4e88c5ea50fe933142a412c18bfd2" exitCode=1
Feb 16 02:09:08.406636 master-0 kubenswrapper[7721]:
E0216 02:09:08.406543 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 16 02:09:08.920193 master-0 kubenswrapper[7721]: E0216 02:09:08.919871 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:08:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:08:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:08:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:08:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1e9c22ee3a299d04b38dd66f73f7c33a2a8a1917885eebea853656e5476d3b7f\\\",\\\"registry.redhat.io/redhat/certified-operator-inde
x@sha256:c1f4e2201c8669fcabb7f8eeb19d9f1fcfb999619a0c1f564460705534e1e625\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1230725325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:382900636cc6bc44aef54886ff17e1948ee0729f04aa833c8319c40657b4fce7\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a577a7b619533a5a450bcc025f40b4792b88c27fdbd8f48fd62b9ffd57d3c5f2\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1212986054},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f739
9\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\"],\\\"sizeBytes\\\":465507019},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d\\\"],\\\"sizeBytes\\\":462065055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\"],\\\"sizeBytes\\\":4
59915626},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c\\\"],\\\"sizeBytes\\\":458531660},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\"],\\\"sizeBytes\\\":452956763},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956\\\"],\\\"sizeBytes\\\":451401927}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:09:08.983349 master-0 kubenswrapper[7721]: I0216 02:09:08.983253 7721 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445" exitCode=1 Feb 16 02:09:13.303011 master-0 kubenswrapper[7721]: W0216 02:09:13.302890 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode379cfaf_3a4c_40e7_8641_3524b3669295.slice/crio-conmon-e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode379cfaf_3a4c_40e7_8641_3524b3669295.slice/crio-conmon-e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a.scope: no such file or directory Feb 16 02:09:13.303922 master-0 
kubenswrapper[7721]: W0216 02:09:13.303021 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5d40eb_8051_46a8_9cd9_d2b1f152dbf0.slice/crio-conmon-c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5d40eb_8051_46a8_9cd9_d2b1f152dbf0.slice/crio-conmon-c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303081 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod774b6ff9_0e37_48fd_96c6_571859fec492.slice/crio-conmon-7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod774b6ff9_0e37_48fd_96c6_571859fec492.slice/crio-conmon-7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303129 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456e6c3a_c16c_470b_a0cd_bb79865b54f0.slice/crio-conmon-d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456e6c3a_c16c_470b_a0cd_bb79865b54f0.slice/crio-conmon-d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303177 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bdb65c2_c4bc_4e33_9e5a_61542c659700.slice/crio-conmon-e5940b75272319c7aabca48d2d6edec79fe11d41b3036bd4f4cedae45e24b5d7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bdb65c2_c4bc_4e33_9e5a_61542c659700.slice/crio-conmon-e5940b75272319c7aabca48d2d6edec79fe11d41b3036bd4f4cedae45e24b5d7.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303222 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode379cfaf_3a4c_40e7_8641_3524b3669295.slice/crio-e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode379cfaf_3a4c_40e7_8641_3524b3669295.slice/crio-e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303270 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5d40eb_8051_46a8_9cd9_d2b1f152dbf0.slice/crio-c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5d40eb_8051_46a8_9cd9_d2b1f152dbf0.slice/crio-c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303322 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab463f74_d1e7_44f1_9634_d9f63685b06d.slice/crio-conmon-699ab655e0c4a6b31309afab886f90e9b48704fa893a80a048fd279bf60a2f9d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab463f74_d1e7_44f1_9634_d9f63685b06d.slice/crio-conmon-699ab655e0c4a6b31309afab886f90e9b48704fa893a80a048fd279bf60a2f9d.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303368 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod774b6ff9_0e37_48fd_96c6_571859fec492.slice/crio-7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod774b6ff9_0e37_48fd_96c6_571859fec492.slice/crio-7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303668 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456e6c3a_c16c_470b_a0cd_bb79865b54f0.slice/crio-d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456e6c3a_c16c_470b_a0cd_bb79865b54f0.slice/crio-d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610.scope: no such file or directory
Feb 16 02:09:13.303922 master-0 kubenswrapper[7721]: W0216 02:09:13.303734 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bdb65c2_c4bc_4e33_9e5a_61542c659700.slice/crio-e5940b75272319c7aabca48d2d6edec79fe11d41b3036bd4f4cedae45e24b5d7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bdb65c2_c4bc_4e33_9e5a_61542c659700.slice/crio-e5940b75272319c7aabca48d2d6edec79fe11d41b3036bd4f4cedae45e24b5d7.scope: no such file or directory
Feb 16 02:09:13.304824 master-0 kubenswrapper[7721]: W0216 02:09:13.304028 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab463f74_d1e7_44f1_9634_d9f63685b06d.slice/crio-699ab655e0c4a6b31309afab886f90e9b48704fa893a80a048fd279bf60a2f9d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab463f74_d1e7_44f1_9634_d9f63685b06d.slice/crio-699ab655e0c4a6b31309afab886f90e9b48704fa893a80a048fd279bf60a2f9d.scope: no such file or directory
Feb 16 02:09:13.391143 master-0 kubenswrapper[7721]: W0216 02:09:13.390899 7721 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-conmon-f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-conmon-f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445.scope: no such file or directory
Feb 16 02:09:13.391143 master-0 kubenswrapper[7721]: W0216 02:09:13.391002 7721 watcher.go:93] Error while processing event
("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445.scope: no such file or directory Feb 16 02:09:13.441928 master-0 kubenswrapper[7721]: E0216 02:09:13.441793 7721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2a7e185_78f4_4d69_b126_d465374a6218.slice/crio-4fc83c72923d06bb27dc72e1294113ac5b34e6ed149d10d4a4b519d50c5b89a9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2a7e185_78f4_4d69_b126_d465374a6218.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eda8a42_765e_47cf_896f_324e8185062e.slice/crio-71f88debd6e32c8926a0394683094104a5453ca71a39b14a81064d10cb0255f9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod0615fd34_eaf9_4a3a_8543_25a7a5747194.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9cd32bc_a13a_44ee_ba52_7bb335c7007b.slice/crio-3c59868e46c60a0139fbb9feace033d7ff3288c7e5f3febf6586656bff57983b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9460ca0802075a8a6a10d7b3e6052c4d.slice/crio-0d84c00dcc11900a2f5a4ff15f798ef8c8b6cc92a9b7e1f32a7c33bfeed4a478.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod980aa005_f51d_4ca2_aee6_a6fdeefd86d0.slice/crio-ff658673f7455373622a5b5ee3d4af2de4dca2c4bf35a4e09ed477558c99902a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a5b01c1_1231_4e69_8b6c_c4981b65b26e.slice/crio-conmon-5fc216122a0910fe569f2678eb8d9427d5895c0ca4368e57e2f60b2f9f7164e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9460ca0802075a8a6a10d7b3e6052c4d.slice/crio-conmon-0d84c00dcc11900a2f5a4ff15f798ef8c8b6cc92a9b7e1f32a7c33bfeed4a478.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a5b01c1_1231_4e69_8b6c_c4981b65b26e.slice/crio-5fc216122a0910fe569f2678eb8d9427d5895c0ca4368e57e2f60b2f9f7164e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod980aa005_f51d_4ca2_aee6_a6fdeefd86d0.slice/crio-conmon-ff658673f7455373622a5b5ee3d4af2de4dca2c4bf35a4e09ed477558c99902a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod80f43f07_ce08_4c21_9463_ea983a110244.slice/crio-b4ee28c7394858e0cf4928c25f023b84c89ee6af3676ca6c853d7b858571c63f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod4733c2df_0f5a_4696_b8c6_2568ebc7debc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91938be6_9ae4_4849_abe8_fc842daecd23.slice/crio-conmon-3af1f9b9834764b079edacabd51db4c771ce412df5b31f88b96200c070e64727.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c02961f_30ec_4405_b7fa_9c4192342ae9.slice/crio-4af3b63baf882cf9b9d02a791b48a2c854ad5ccd1fbd43903fb2e66b8e587e95.scope\": RecentStats: unable 
to find data in memory cache], [\"/kubepods.slice/kubepods-pod8ea4c28c_8f53_4b41_9c85_c8c50599d7cd.slice/crio-conmon-bb9ffb6ca918ba3341c8df0e7c8c8ba7325d86a68eb6b2856270c9f7326551b5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod4733c2df_0f5a_4696_b8c6_2568ebc7debc.slice/crio-3537f96b40e8859a6a366ec6550aaba73f34d9f862f4f2e89eccfbc047d01b00.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod80f43f07_ce08_4c21_9463_ea983a110244.slice/crio-conmon-b4ee28c7394858e0cf4928c25f023b84c89ee6af3676ca6c853d7b858571c63f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1743372f_bdb0_4558_b47b_3714f3aa3fde.slice/crio-conmon-1f742ab76573db69bc143df83fcf581f4c09f3de9ec005f01809b1af5690b4d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eda8a42_765e_47cf_896f_324e8185062e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1743372f_bdb0_4558_b47b_3714f3aa3fde.slice/crio-1f742ab76573db69bc143df83fcf581f4c09f3de9ec005f01809b1af5690b4d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400a178a4d5e9a88ba5bbbd1da2ad15e.slice/crio-a24c34b6599e046cb5b217ee112cd5793502433694aca39a7811b07f3f980447\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod80f43f07_ce08_4c21_9463_ea983a110244.slice/crio-aae309ad89c83d46c9fddf6708eda09d37f1fa06aa9277a0a246c53f3525897c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod4733c2df_0f5a_4696_b8c6_2568ebc7debc.slice/crio-conmon-3537f96b40e8859a6a366ec6550aaba73f34d9f862f4f2e89eccfbc047d01b00.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400a178a4d5e9a88ba5bbbd1da2ad15e.slice/crio-conmon-d3c90f1e73202b8ff7d7463b840dabc14c0987d4a5dea05816767a582d4b8f44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbc5b101_936f_4bf3_bbf3_f30966b0ab50.slice/crio-conmon-73c5d4096ed4f5f723bea74695c09c9920b7cf6836ef92fa2286119a88696c78.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400a178a4d5e9a88ba5bbbd1da2ad15e.slice/crio-1ac49b435dd0ca350c530f89dad4bc64dffda1e4e142d28d15199074e3eba071.scope\": RecentStats: unable to find data in memory cache]" Feb 16 02:09:13.708081 master-0 kubenswrapper[7721]: I0216 02:09:13.707730 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-747969bcdd-dth9n_d8b1d77b-0955-44f3-a780-e8b6813aff0b/oauth-apiserver/0.log" Feb 16 02:09:13.708683 master-0 kubenswrapper[7721]: I0216 02:09:13.708625 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" Feb 16 02:09:13.814146 master-0 kubenswrapper[7721]: I0216 02:09:13.813995 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-trusted-ca-bundle\") pod \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " Feb 16 02:09:13.814509 master-0 kubenswrapper[7721]: I0216 02:09:13.814183 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-serving-ca\") pod \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " Feb 16 02:09:13.814509 master-0 kubenswrapper[7721]: I0216 02:09:13.814286 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-dir\") pod \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " Feb 16 02:09:13.814509 master-0 kubenswrapper[7721]: I0216 02:09:13.814397 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d8b1d77b-0955-44f3-a780-e8b6813aff0b" (UID: "d8b1d77b-0955-44f3-a780-e8b6813aff0b"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:09:13.814711 master-0 kubenswrapper[7721]: I0216 02:09:13.814533 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-encryption-config\") pod \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " Feb 16 02:09:13.814711 master-0 kubenswrapper[7721]: I0216 02:09:13.814685 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-client\") pod \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " Feb 16 02:09:13.814852 master-0 kubenswrapper[7721]: I0216 02:09:13.814724 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9jxm\" (UniqueName: \"kubernetes.io/projected/d8b1d77b-0955-44f3-a780-e8b6813aff0b-kube-api-access-r9jxm\") pod \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " Feb 16 02:09:13.814852 master-0 kubenswrapper[7721]: I0216 02:09:13.814781 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-policies\") pod \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " Feb 16 02:09:13.814852 master-0 kubenswrapper[7721]: I0216 02:09:13.814839 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-serving-cert\") pod \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\" (UID: \"d8b1d77b-0955-44f3-a780-e8b6813aff0b\") " Feb 16 02:09:13.815907 master-0 kubenswrapper[7721]: I0216 02:09:13.815112 7721 reconciler_common.go:293] 
"Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:13.815907 master-0 kubenswrapper[7721]: I0216 02:09:13.815159 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d8b1d77b-0955-44f3-a780-e8b6813aff0b" (UID: "d8b1d77b-0955-44f3-a780-e8b6813aff0b"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:09:13.815907 master-0 kubenswrapper[7721]: I0216 02:09:13.815329 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d8b1d77b-0955-44f3-a780-e8b6813aff0b" (UID: "d8b1d77b-0955-44f3-a780-e8b6813aff0b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:09:13.816160 master-0 kubenswrapper[7721]: I0216 02:09:13.816051 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d8b1d77b-0955-44f3-a780-e8b6813aff0b" (UID: "d8b1d77b-0955-44f3-a780-e8b6813aff0b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:09:13.820282 master-0 kubenswrapper[7721]: I0216 02:09:13.820223 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8b1d77b-0955-44f3-a780-e8b6813aff0b" (UID: "d8b1d77b-0955-44f3-a780-e8b6813aff0b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:09:13.820786 master-0 kubenswrapper[7721]: I0216 02:09:13.820689 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d8b1d77b-0955-44f3-a780-e8b6813aff0b" (UID: "d8b1d77b-0955-44f3-a780-e8b6813aff0b"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:09:13.821023 master-0 kubenswrapper[7721]: I0216 02:09:13.820977 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d8b1d77b-0955-44f3-a780-e8b6813aff0b" (UID: "d8b1d77b-0955-44f3-a780-e8b6813aff0b"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:09:13.821584 master-0 kubenswrapper[7721]: I0216 02:09:13.821370 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b1d77b-0955-44f3-a780-e8b6813aff0b-kube-api-access-r9jxm" (OuterVolumeSpecName: "kube-api-access-r9jxm") pod "d8b1d77b-0955-44f3-a780-e8b6813aff0b" (UID: "d8b1d77b-0955-44f3-a780-e8b6813aff0b"). InnerVolumeSpecName "kube-api-access-r9jxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:09:13.916690 master-0 kubenswrapper[7721]: I0216 02:09:13.916482 7721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:13.916690 master-0 kubenswrapper[7721]: I0216 02:09:13.916559 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9jxm\" (UniqueName: \"kubernetes.io/projected/d8b1d77b-0955-44f3-a780-e8b6813aff0b-kube-api-access-r9jxm\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:13.916690 master-0 kubenswrapper[7721]: I0216 02:09:13.916588 7721 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:13.916690 master-0 kubenswrapper[7721]: I0216 02:09:13.916612 7721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:13.916690 master-0 kubenswrapper[7721]: I0216 02:09:13.916632 7721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:13.916690 master-0 kubenswrapper[7721]: I0216 02:09:13.916649 7721 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b1d77b-0955-44f3-a780-e8b6813aff0b-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:13.916690 master-0 kubenswrapper[7721]: I0216 02:09:13.916668 7721 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d8b1d77b-0955-44f3-a780-e8b6813aff0b-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:14.017627 master-0 kubenswrapper[7721]: I0216 02:09:14.017554 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-747969bcdd-dth9n_d8b1d77b-0955-44f3-a780-e8b6813aff0b/oauth-apiserver/0.log" Feb 16 02:09:14.018322 master-0 kubenswrapper[7721]: I0216 02:09:14.018254 7721 generic.go:334] "Generic (PLEG): container finished" podID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" containerID="83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24" exitCode=137 Feb 16 02:09:14.018505 master-0 kubenswrapper[7721]: I0216 02:09:14.018384 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" Feb 16 02:09:18.808938 master-0 kubenswrapper[7721]: E0216 02:09:18.808790 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 16 02:09:18.921562 master-0 kubenswrapper[7721]: E0216 02:09:18.921414 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:09:28.922640 master-0 kubenswrapper[7721]: E0216 02:09:28.922531 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:09:29.541504 master-0 kubenswrapper[7721]: I0216 02:09:29.541390 
7721 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-htjgz container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body= Feb 16 02:09:29.541802 master-0 kubenswrapper[7721]: I0216 02:09:29.541524 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" podUID="724ac845-3835-458b-9645-e665be135ff9" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" Feb 16 02:09:29.697325 master-0 kubenswrapper[7721]: E0216 02:09:29.697238 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 16 02:09:32.528867 master-0 kubenswrapper[7721]: E0216 02:09:32.522687 7721 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="33.782s" Feb 16 02:09:32.528867 master-0 kubenswrapper[7721]: I0216 02:09:32.522783 7721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:09:32.528867 master-0 kubenswrapper[7721]: I0216 02:09:32.522824 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" event={"ID":"4a5b01c1-1231-4e69-8b6c-c4981b65b26e","Type":"ContainerDied","Data":"5fc216122a0910fe569f2678eb8d9427d5895c0ca4368e57e2f60b2f9f7164e2"} Feb 16 02:09:32.528867 master-0 kubenswrapper[7721]: I0216 02:09:32.522876 7721 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" event={"ID":"980aa005-f51d-4ca2-aee6-a6fdeefd86d0","Type":"ContainerDied","Data":"ff658673f7455373622a5b5ee3d4af2de4dca2c4bf35a4e09ed477558c99902a"} Feb 16 02:09:32.532520 master-0 kubenswrapper[7721]: I0216 02:09:32.531375 7721 scope.go:117] "RemoveContainer" containerID="5fc216122a0910fe569f2678eb8d9427d5895c0ca4368e57e2f60b2f9f7164e2" Feb 16 02:09:32.533805 master-0 kubenswrapper[7721]: I0216 02:09:32.532657 7721 scope.go:117] "RemoveContainer" containerID="ff658673f7455373622a5b5ee3d4af2de4dca2c4bf35a4e09ed477558c99902a" Feb 16 02:09:32.533805 master-0 kubenswrapper[7721]: I0216 02:09:32.533404 7721 scope.go:117] "RemoveContainer" containerID="3c59868e46c60a0139fbb9feace033d7ff3288c7e5f3febf6586656bff57983b" Feb 16 02:09:32.567045 master-0 kubenswrapper[7721]: I0216 02:09:32.566979 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 02:09:32.571235 master-0 kubenswrapper[7721]: I0216 02:09:32.571184 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:09:32.572546 master-0 kubenswrapper[7721]: I0216 02:09:32.572497 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" event={"ID":"c9cd32bc-a13a-44ee-ba52-7bb335c7007b","Type":"ContainerDied","Data":"3c59868e46c60a0139fbb9feace033d7ff3288c7e5f3febf6586656bff57983b"} Feb 16 02:09:32.572546 master-0 kubenswrapper[7721]: I0216 02:09:32.572544 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572578 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572596 7721 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="3eb85169-4551-4faa-8640-c9b308521d8f" Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572614 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" event={"ID":"1743372f-bdb0-4558-b47b-3714f3aa3fde","Type":"ContainerDied","Data":"1f742ab76573db69bc143df83fcf581f4c09f3de9ec005f01809b1af5690b4d3"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572629 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" event={"ID":"91938be6-9ae4-4849-abe8-fc842daecd23","Type":"ContainerDied","Data":"3af1f9b9834764b079edacabd51db4c771ce412df5b31f88b96200c070e64727"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572658 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" event={"ID":"724ac845-3835-458b-9645-e665be135ff9","Type":"ContainerDied","Data":"7d8525382e7c303df250ff37074c2b59dae064f1c16fab17985b8492c29587df"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572674 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572697 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101"} Feb 16 
02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572709 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572722 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572734 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572754 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kffmg" event={"ID":"dbc5b101-936f-4bf3-bbf3-f30966b0ab50","Type":"ContainerDied","Data":"73c5d4096ed4f5f723bea74695c09c9920b7cf6836ef92fa2286119a88696c78"} Feb 16 02:09:32.572752 master-0 kubenswrapper[7721]: I0216 02:09:32.572773 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd","Type":"ContainerDied","Data":"bb9ffb6ca918ba3341c8df0e7c8c8ba7325d86a68eb6b2856270c9f7326551b5"} Feb 16 02:09:32.573565 master-0 kubenswrapper[7721]: I0216 02:09:32.572795 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerDied","Data":"87993edba6f07930300de55e54a0440afea4e88c5ea50fe933142a412c18bfd2"} Feb 16 02:09:32.573565 master-0 
kubenswrapper[7721]: I0216 02:09:32.572823 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445"} Feb 16 02:09:32.573565 master-0 kubenswrapper[7721]: I0216 02:09:32.572840 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" event={"ID":"d8b1d77b-0955-44f3-a780-e8b6813aff0b","Type":"ContainerDied","Data":"83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24"} Feb 16 02:09:32.573565 master-0 kubenswrapper[7721]: I0216 02:09:32.572856 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-747969bcdd-dth9n" event={"ID":"d8b1d77b-0955-44f3-a780-e8b6813aff0b","Type":"ContainerDied","Data":"e1b1da20eba3376b5d5cc12627127c03688f3c11231a9b29f93539b8e871d089"} Feb 16 02:09:32.573565 master-0 kubenswrapper[7721]: I0216 02:09:32.572880 7721 scope.go:117] "RemoveContainer" containerID="e320e64d785f6de34aed9795724368979c84944d7c7a25afb100430d56e9ef3e" Feb 16 02:09:32.576396 master-0 kubenswrapper[7721]: I0216 02:09:32.576350 7721 scope.go:117] "RemoveContainer" containerID="f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445" Feb 16 02:09:32.577244 master-0 kubenswrapper[7721]: I0216 02:09:32.577132 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g"] Feb 16 02:09:32.577418 master-0 kubenswrapper[7721]: I0216 02:09:32.577308 7721 scope.go:117] "RemoveContainer" containerID="d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610" Feb 16 02:09:32.583764 master-0 kubenswrapper[7721]: I0216 02:09:32.583616 7721 scope.go:117] "RemoveContainer" containerID="7d8525382e7c303df250ff37074c2b59dae064f1c16fab17985b8492c29587df" Feb 16 02:09:32.585685 master-0 
kubenswrapper[7721]: I0216 02:09:32.585634 7721 scope.go:117] "RemoveContainer" containerID="73c5d4096ed4f5f723bea74695c09c9920b7cf6836ef92fa2286119a88696c78" Feb 16 02:09:32.585803 master-0 kubenswrapper[7721]: I0216 02:09:32.585722 7721 scope.go:117] "RemoveContainer" containerID="1f742ab76573db69bc143df83fcf581f4c09f3de9ec005f01809b1af5690b4d3" Feb 16 02:09:32.586873 master-0 kubenswrapper[7721]: I0216 02:09:32.586833 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6"] Feb 16 02:09:32.588023 master-0 kubenswrapper[7721]: I0216 02:09:32.587961 7721 scope.go:117] "RemoveContainer" containerID="e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a" Feb 16 02:09:32.588567 master-0 kubenswrapper[7721]: I0216 02:09:32.588487 7721 scope.go:117] "RemoveContainer" containerID="87993edba6f07930300de55e54a0440afea4e88c5ea50fe933142a412c18bfd2" Feb 16 02:09:32.590041 master-0 kubenswrapper[7721]: I0216 02:09:32.589464 7721 scope.go:117] "RemoveContainer" containerID="3af1f9b9834764b079edacabd51db4c771ce412df5b31f88b96200c070e64727" Feb 16 02:09:32.590041 master-0 kubenswrapper[7721]: I0216 02:09:32.589999 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-llpf5"] Feb 16 02:09:32.593759 master-0 kubenswrapper[7721]: I0216 02:09:32.593685 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 16 02:09:32.596383 master-0 kubenswrapper[7721]: I0216 02:09:32.596320 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg"] Feb 16 02:09:32.599118 master-0 kubenswrapper[7721]: I0216 02:09:32.598996 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"] Feb 16 02:09:32.602894 master-0 kubenswrapper[7721]: I0216 02:09:32.602840 7721 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm"] Feb 16 02:09:32.605590 master-0 kubenswrapper[7721]: I0216 02:09:32.605524 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"] Feb 16 02:09:32.608088 master-0 kubenswrapper[7721]: I0216 02:09:32.607841 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj"] Feb 16 02:09:32.609808 master-0 kubenswrapper[7721]: I0216 02:09:32.609722 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 02:09:32.609808 master-0 kubenswrapper[7721]: I0216 02:09:32.609771 7721 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="3eb85169-4551-4faa-8640-c9b308521d8f" Feb 16 02:09:32.678526 master-0 kubenswrapper[7721]: I0216 02:09:32.678406 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-trqk8" podStartSLOduration=94.361356878 podStartE2EDuration="2m8.678375508s" podCreationTimestamp="2026-02-16 02:07:24 +0000 UTC" firstStartedPulling="2026-02-16 02:07:26.461330271 +0000 UTC m=+49.955564533" lastFinishedPulling="2026-02-16 02:08:00.778348891 +0000 UTC m=+84.272583163" observedRunningTime="2026-02-16 02:09:32.675866406 +0000 UTC m=+176.170100708" watchObservedRunningTime="2026-02-16 02:09:32.678375508 +0000 UTC m=+176.172609810" Feb 16 02:09:32.694734 master-0 kubenswrapper[7721]: I0216 02:09:32.694017 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 02:09:32.706238 master-0 kubenswrapper[7721]: I0216 02:09:32.705972 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 02:09:32.730764 
master-0 kubenswrapper[7721]: I0216 02:09:32.730687 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" podStartSLOduration=100.335502727 podStartE2EDuration="1m44.730661706s" podCreationTimestamp="2026-02-16 02:07:48 +0000 UTC" firstStartedPulling="2026-02-16 02:07:52.545706324 +0000 UTC m=+76.039940586" lastFinishedPulling="2026-02-16 02:07:56.940865293 +0000 UTC m=+80.435099565" observedRunningTime="2026-02-16 02:09:32.728073842 +0000 UTC m=+176.222308124" watchObservedRunningTime="2026-02-16 02:09:32.730661706 +0000 UTC m=+176.224895978" Feb 16 02:09:32.732657 master-0 kubenswrapper[7721]: I0216 02:09:32.732607 7721 scope.go:117] "RemoveContainer" containerID="83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24" Feb 16 02:09:32.735578 master-0 kubenswrapper[7721]: I0216 02:09:32.735539 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45476b57-538b-4031-80c9-8025a49e8e88" path="/var/lib/kubelet/pods/45476b57-538b-4031-80c9-8025a49e8e88/volumes" Feb 16 02:09:32.801426 master-0 kubenswrapper[7721]: I0216 02:09:32.801375 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5574f479df-xqnpg"] Feb 16 02:09:32.817747 master-0 kubenswrapper[7721]: I0216 02:09:32.817705 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5574f479df-xqnpg"] Feb 16 02:09:32.826926 master-0 kubenswrapper[7721]: I0216 02:09:32.826645 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" podStartSLOduration=109.841297631 podStartE2EDuration="1m53.826624418s" podCreationTimestamp="2026-02-16 02:07:39 +0000 UTC" firstStartedPulling="2026-02-16 02:07:52.918808544 +0000 UTC m=+76.413042806" lastFinishedPulling="2026-02-16 
02:07:56.904135331 +0000 UTC m=+80.398369593" observedRunningTime="2026-02-16 02:09:32.823995733 +0000 UTC m=+176.318230005" watchObservedRunningTime="2026-02-16 02:09:32.826624418 +0000 UTC m=+176.320858680" Feb 16 02:09:32.844849 master-0 kubenswrapper[7721]: I0216 02:09:32.844603 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"] Feb 16 02:09:32.853322 master-0 kubenswrapper[7721]: I0216 02:09:32.853257 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696cfb9f87-87b8w"] Feb 16 02:09:32.896154 master-0 kubenswrapper[7721]: I0216 02:09:32.896046 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" podStartSLOduration=113.926975167 podStartE2EDuration="1m57.89601119s" podCreationTimestamp="2026-02-16 02:07:35 +0000 UTC" firstStartedPulling="2026-02-16 02:07:52.935354045 +0000 UTC m=+76.429588297" lastFinishedPulling="2026-02-16 02:07:56.904390018 +0000 UTC m=+80.398624320" observedRunningTime="2026-02-16 02:09:32.892164555 +0000 UTC m=+176.386398817" watchObservedRunningTime="2026-02-16 02:09:32.89601119 +0000 UTC m=+176.390245472" Feb 16 02:09:32.900967 master-0 kubenswrapper[7721]: I0216 02:09:32.900895 7721 scope.go:117] "RemoveContainer" containerID="16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c" Feb 16 02:09:32.968283 master-0 kubenswrapper[7721]: I0216 02:09:32.968198 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"] Feb 16 02:09:32.973122 master-0 kubenswrapper[7721]: I0216 02:09:32.973058 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-747969bcdd-dth9n"] Feb 16 02:09:32.991847 master-0 kubenswrapper[7721]: I0216 02:09:32.991751 7721 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9qtbw" podStartSLOduration=93.714557565 podStartE2EDuration="2m7.991712765s" podCreationTimestamp="2026-02-16 02:07:25 +0000 UTC" firstStartedPulling="2026-02-16 02:07:26.473111253 +0000 UTC m=+49.967345515" lastFinishedPulling="2026-02-16 02:08:00.750266443 +0000 UTC m=+84.244500715" observedRunningTime="2026-02-16 02:09:32.987249145 +0000 UTC m=+176.481483407" watchObservedRunningTime="2026-02-16 02:09:32.991712765 +0000 UTC m=+176.485947027" Feb 16 02:09:33.016228 master-0 kubenswrapper[7721]: I0216 02:09:33.015753 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9kb98" podStartSLOduration=95.676705035 podStartE2EDuration="2m11.015728311s" podCreationTimestamp="2026-02-16 02:07:22 +0000 UTC" firstStartedPulling="2026-02-16 02:07:25.439451518 +0000 UTC m=+48.933685780" lastFinishedPulling="2026-02-16 02:08:00.778474754 +0000 UTC m=+84.272709056" observedRunningTime="2026-02-16 02:09:33.011408374 +0000 UTC m=+176.505642636" watchObservedRunningTime="2026-02-16 02:09:33.015728311 +0000 UTC m=+176.509962573" Feb 16 02:09:33.030797 master-0 kubenswrapper[7721]: I0216 02:09:33.030703 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vkp55" podStartSLOduration=95.769889079 podStartE2EDuration="2m11.030681913s" podCreationTimestamp="2026-02-16 02:07:22 +0000 UTC" firstStartedPulling="2026-02-16 02:07:25.445426626 +0000 UTC m=+48.939660908" lastFinishedPulling="2026-02-16 02:08:00.70621944 +0000 UTC m=+84.200453742" observedRunningTime="2026-02-16 02:09:33.029940954 +0000 UTC m=+176.524175236" watchObservedRunningTime="2026-02-16 02:09:33.030681913 +0000 UTC m=+176.524916175" Feb 16 02:09:33.069320 master-0 kubenswrapper[7721]: I0216 02:09:33.069177 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 02:09:33.079368 master-0 kubenswrapper[7721]: I0216 02:09:33.077172 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 02:09:33.121734 master-0 kubenswrapper[7721]: I0216 02:09:33.112714 7721 scope.go:117] "RemoveContainer" containerID="83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24" Feb 16 02:09:33.121734 master-0 kubenswrapper[7721]: E0216 02:09:33.120182 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24\": container with ID starting with 83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24 not found: ID does not exist" containerID="83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24" Feb 16 02:09:33.121734 master-0 kubenswrapper[7721]: I0216 02:09:33.120226 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24"} err="failed to get container status \"83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24\": rpc error: code = NotFound desc = could not find container \"83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24\": container with ID starting with 83c4f2970f98a6eeab62cbdfc7dd76ac41aa01e82c1c4f0bce75a351953e7d24 not found: ID does not exist" Feb 16 02:09:33.121734 master-0 kubenswrapper[7721]: I0216 02:09:33.120256 7721 scope.go:117] "RemoveContainer" containerID="16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c" Feb 16 02:09:33.142664 master-0 kubenswrapper[7721]: E0216 02:09:33.130892 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c\": 
container with ID starting with 16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c not found: ID does not exist" containerID="16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c" Feb 16 02:09:33.142664 master-0 kubenswrapper[7721]: I0216 02:09:33.130936 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c"} err="failed to get container status \"16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c\": rpc error: code = NotFound desc = could not find container \"16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c\": container with ID starting with 16aa4cd5e26bb981ce33f09412128895c5e974af71a06a250577d00d2a158d8c not found: ID does not exist" Feb 16 02:09:33.187242 master-0 kubenswrapper[7721]: I0216 02:09:33.187182 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" podStartSLOduration=105.774592876 podStartE2EDuration="1m49.187153786s" podCreationTimestamp="2026-02-16 02:07:44 +0000 UTC" firstStartedPulling="2026-02-16 02:07:53.491624112 +0000 UTC m=+76.985858374" lastFinishedPulling="2026-02-16 02:07:56.904185002 +0000 UTC m=+80.398419284" observedRunningTime="2026-02-16 02:09:33.147761719 +0000 UTC m=+176.641995981" watchObservedRunningTime="2026-02-16 02:09:33.187153786 +0000 UTC m=+176.681388048" Feb 16 02:09:33.191520 master-0 kubenswrapper[7721]: I0216 02:09:33.191249 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" event={"ID":"c9cd32bc-a13a-44ee-ba52-7bb335c7007b","Type":"ContainerStarted","Data":"00959d2f0302d7424610700e0682f887d9314d52514346143681e4644ccabd4b"} Feb 16 02:09:33.199667 master-0 kubenswrapper[7721]: I0216 02:09:33.199605 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" event={"ID":"a77e2f8f-d164-4a58-aab2-f3444c05cacb","Type":"ContainerStarted","Data":"6c0cfba2536520f6ae9edb17fbb1f2d62f0a336f61c097893c2d906e44086caa"} Feb 16 02:09:33.220672 master-0 kubenswrapper[7721]: I0216 02:09:33.216498 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" event={"ID":"30fef0d5-46ea-4fa3-9ffa-88187d010ffe","Type":"ContainerStarted","Data":"41b5c41034cb2148b1d5fd3324b4593a07df9354af0a87ec90c46af0858415c8"} Feb 16 02:09:33.220672 master-0 kubenswrapper[7721]: I0216 02:09:33.216557 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" event={"ID":"30fef0d5-46ea-4fa3-9ffa-88187d010ffe","Type":"ContainerStarted","Data":"f1504c9a4b0e4bf6149e0491153df3c7ffe2143b38a16877cba6aa9e83843b5a"} Feb 16 02:09:33.220672 master-0 kubenswrapper[7721]: I0216 02:09:33.219779 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" event={"ID":"fec84b8a-a0d1-4b07-8827-cef0beb89ecd","Type":"ContainerStarted","Data":"1ec6f44f4ada0ba2497ec6a89eb6dca7d3c20475b88100af577ab7e20a92a920"} Feb 16 02:09:33.220672 master-0 kubenswrapper[7721]: I0216 02:09:33.219817 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" event={"ID":"fec84b8a-a0d1-4b07-8827-cef0beb89ecd","Type":"ContainerStarted","Data":"86f122d4749ad1424c989c1ff460643fbb5843c1b201386f01f941204bc41b87"} Feb 16 02:09:33.224256 master-0 kubenswrapper[7721]: I0216 02:09:33.224212 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" 
event={"ID":"4a5b01c1-1231-4e69-8b6c-c4981b65b26e","Type":"ContainerStarted","Data":"46a31a50590f28665abe6bbfd0c6e529464f6c0c2c9645356e13a0a930c9a81f"} Feb 16 02:09:33.231505 master-0 kubenswrapper[7721]: I0216 02:09:33.231462 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3","Type":"ContainerStarted","Data":"98dcc1912f40abe649e3505484d85a2636bf298671bda45fdf2eb9864ccd1111"} Feb 16 02:09:33.239192 master-0 kubenswrapper[7721]: I0216 02:09:33.234680 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" event={"ID":"27a42eb0-677c-414d-b0ec-f945ec39b7e9","Type":"ContainerStarted","Data":"c7eb8cc3989ea5b05dd2c5ae1244d08f1947ba602e6ae89eb69848dbf5ea8e95"} Feb 16 02:09:33.239192 master-0 kubenswrapper[7721]: I0216 02:09:33.236856 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" event={"ID":"48863ff6-63ac-42d7-bac7-29d888c92db9","Type":"ContainerStarted","Data":"b0ab1e4e2fad74f40299776a55406eb6765bae583e11fb12ff835ab3d7861429"} Feb 16 02:09:33.239192 master-0 kubenswrapper[7721]: I0216 02:09:33.236878 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" event={"ID":"48863ff6-63ac-42d7-bac7-29d888c92db9","Type":"ContainerStarted","Data":"4476a4bc67f1c6ee7cd4f19dd630e65931f829a02ef5d857f284d2df2a08dd8d"} Feb 16 02:09:33.250941 master-0 kubenswrapper[7721]: I0216 02:09:33.250915 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/0.log" Feb 16 02:09:33.251055 master-0 kubenswrapper[7721]: I0216 02:09:33.251029 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerStarted","Data":"7c934fcf17603ba6880730036301dc7740655f1f475a9dcfef2ce3f1ec5f5b71"} Feb 16 02:09:33.265121 master-0 kubenswrapper[7721]: I0216 02:09:33.265066 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" event={"ID":"dc3354cb-b6c3-40a5-a695-cccb079ad292","Type":"ContainerStarted","Data":"cdd12ede85d8d0f0b36af1731de04fc7394c2e26b11dd32c115b4d2123b49330"} Feb 16 02:09:33.265201 master-0 kubenswrapper[7721]: I0216 02:09:33.265138 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" event={"ID":"dc3354cb-b6c3-40a5-a695-cccb079ad292","Type":"ContainerStarted","Data":"4b908a349a6f9bb67998eaa77c0cb0b67337fd06d9753261cfa10d0744b50e07"} Feb 16 02:09:33.266949 master-0 kubenswrapper[7721]: I0216 02:09:33.266405 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:09:33.268817 master-0 kubenswrapper[7721]: I0216 02:09:33.268782 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" event={"ID":"0abea413-e08a-465a-8ec4-2be650bfd5bd","Type":"ContainerStarted","Data":"e04624272a6ae061a3899df44b95c1f16652305181d31b35b7d1c234a03226ba"} Feb 16 02:09:33.273177 master-0 kubenswrapper[7721]: I0216 02:09:33.271383 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" event={"ID":"d870332c-2498-4135-a9b3-a71e67c2805b","Type":"ContainerStarted","Data":"8a15ec6edf531733b3fdbab5958c503602c9f05e39693986c688462128642a62"} Feb 16 02:09:33.273177 master-0 kubenswrapper[7721]: I0216 02:09:33.271425 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" event={"ID":"d870332c-2498-4135-a9b3-a71e67c2805b","Type":"ContainerStarted","Data":"aab7eeb6e8bf766155c633f93a77e37a4ca269be0e48fc054214cf6cfcafebc6"} Feb 16 02:09:33.278850 master-0 kubenswrapper[7721]: I0216 02:09:33.278820 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" event={"ID":"980aa005-f51d-4ca2-aee6-a6fdeefd86d0","Type":"ContainerStarted","Data":"a6d90aff6f8ce2ab976f48907c4d1b01e98afde362aa201e2dc712d88fff6eb6"} Feb 16 02:09:33.518068 master-0 kubenswrapper[7721]: I0216 02:09:33.517605 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8ea4c28c-8f53-4b41-9c85-c8c50599d7cd/installer/0.log" Feb 16 02:09:33.518068 master-0 kubenswrapper[7721]: I0216 02:09:33.517672 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:09:33.534711 master-0 kubenswrapper[7721]: I0216 02:09:33.533872 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=104.533847021 podStartE2EDuration="1m44.533847021s" podCreationTimestamp="2026-02-16 02:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:09:33.510252596 +0000 UTC m=+177.004486848" watchObservedRunningTime="2026-02-16 02:09:33.533847021 +0000 UTC m=+177.028081283" Feb 16 02:09:33.614452 master-0 kubenswrapper[7721]: I0216 02:09:33.614306 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podStartSLOduration=105.614281508 podStartE2EDuration="1m45.614281508s" podCreationTimestamp="2026-02-16 02:07:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:09:33.612028882 +0000 UTC m=+177.106263144" watchObservedRunningTime="2026-02-16 02:09:33.614281508 +0000 UTC m=+177.108515770" Feb 16 02:09:33.614896 master-0 kubenswrapper[7721]: I0216 02:09:33.614855 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kubelet-dir\") pod \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " Feb 16 02:09:33.615003 master-0 kubenswrapper[7721]: I0216 02:09:33.614929 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" (UID: "8ea4c28c-8f53-4b41-9c85-c8c50599d7cd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:09:33.615003 master-0 kubenswrapper[7721]: I0216 02:09:33.614988 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-var-lock\") pod \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " Feb 16 02:09:33.615796 master-0 kubenswrapper[7721]: I0216 02:09:33.615056 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kube-api-access\") pod \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\" (UID: \"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd\") " Feb 16 02:09:33.615796 master-0 kubenswrapper[7721]: I0216 02:09:33.615080 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-var-lock" (OuterVolumeSpecName: "var-lock") pod "8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" (UID: "8ea4c28c-8f53-4b41-9c85-c8c50599d7cd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:09:33.615796 master-0 kubenswrapper[7721]: I0216 02:09:33.615260 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:33.615796 master-0 kubenswrapper[7721]: I0216 02:09:33.615273 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:33.619110 master-0 kubenswrapper[7721]: I0216 02:09:33.619017 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" (UID: "8ea4c28c-8f53-4b41-9c85-c8c50599d7cd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:09:33.716015 master-0 kubenswrapper[7721]: I0216 02:09:33.715968 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea4c28c-8f53-4b41-9c85-c8c50599d7cd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:09:34.267063 master-0 kubenswrapper[7721]: I0216 02:09:34.266987 7721 patch_prober.go:28] interesting pod/packageserver-87777c9b7-fxzh6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 02:09:34.267280 master-0 kubenswrapper[7721]: I0216 02:09:34.267082 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podUID="dc3354cb-b6c3-40a5-a695-cccb079ad292" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.65:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 02:09:34.288108 master-0 kubenswrapper[7721]: I0216 02:09:34.288053 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" event={"ID":"91938be6-9ae4-4849-abe8-fc842daecd23","Type":"ContainerStarted","Data":"8a48a537d71041a80c937732c1f781180f7ddc98ac17ab0ac136bc4201988932"} Feb 16 02:09:34.290788 master-0 kubenswrapper[7721]: I0216 02:09:34.290738 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-v7lmz_e379cfaf-3a4c-40e7-8641-3524b3669295/openshift-apiserver-operator/1.log" Feb 16 02:09:34.290905 master-0 kubenswrapper[7721]: I0216 02:09:34.290865 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" event={"ID":"e379cfaf-3a4c-40e7-8641-3524b3669295","Type":"ContainerStarted","Data":"29588e18b21fc378729e293fc4d3e978d87e6e1444fa9f91d1cf677cd080ce85"} Feb 16 02:09:34.293575 master-0 kubenswrapper[7721]: I0216 02:09:34.293532 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" event={"ID":"1743372f-bdb0-4558-b47b-3714f3aa3fde","Type":"ContainerStarted","Data":"0dc42258a91591e663cff6bfadf2f4cf0cbbf5b87ca16464ed15e85bd5889c2f"} Feb 16 02:09:34.296686 master-0 kubenswrapper[7721]: I0216 02:09:34.296650 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-dctqr_456e6c3a-c16c-470b-a0cd-bb79865b54f0/network-operator/1.log" Feb 16 02:09:34.296832 master-0 kubenswrapper[7721]: I0216 02:09:34.296796 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" event={"ID":"456e6c3a-c16c-470b-a0cd-bb79865b54f0","Type":"ContainerStarted","Data":"0315328a7c0259163748331a3160b081a82efff7afa5ee439e110ed017ac4025"} Feb 16 02:09:34.299417 master-0 kubenswrapper[7721]: I0216 02:09:34.299383 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kffmg_dbc5b101-936f-4bf3-bbf3-f30966b0ab50/approver/0.log" Feb 16 02:09:34.299823 master-0 kubenswrapper[7721]: I0216 02:09:34.299786 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kffmg" event={"ID":"dbc5b101-936f-4bf3-bbf3-f30966b0ab50","Type":"ContainerStarted","Data":"e2b95d6d9e0e9f98872131f6b8b2e0daa77ccd636475f9813d73f42413bc869a"} Feb 16 02:09:34.303236 master-0 kubenswrapper[7721]: I0216 02:09:34.303199 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" event={"ID":"d870332c-2498-4135-a9b3-a71e67c2805b","Type":"ContainerStarted","Data":"94f11df713e789e5394924e9c22e3db8cd20d4b4de5af6e35ce0305dc1c7e1e3"} Feb 16 02:09:34.304850 master-0 kubenswrapper[7721]: I0216 02:09:34.304816 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8ea4c28c-8f53-4b41-9c85-c8c50599d7cd/installer/0.log" Feb 16 02:09:34.304923 master-0 kubenswrapper[7721]: I0216 02:09:34.304897 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"8ea4c28c-8f53-4b41-9c85-c8c50599d7cd","Type":"ContainerDied","Data":"c1cda764f5f0471c7463d4e4932eaea865d91aa81030462076bd5270b356dfca"} Feb 16 02:09:34.304982 master-0 kubenswrapper[7721]: I0216 02:09:34.304917 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:09:34.305030 master-0 kubenswrapper[7721]: I0216 02:09:34.304920 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cda764f5f0471c7463d4e4932eaea865d91aa81030462076bd5270b356dfca" Feb 16 02:09:34.308314 master-0 kubenswrapper[7721]: I0216 02:09:34.306875 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3","Type":"ContainerStarted","Data":"6880014992fa93e0c0801558387fe49a32761a32c34c61cc54ee116a4f50adda"} Feb 16 02:09:34.310827 master-0 kubenswrapper[7721]: I0216 02:09:34.310791 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" event={"ID":"724ac845-3835-458b-9645-e665be135ff9","Type":"ContainerStarted","Data":"f4cc6bf86c33c3e578a43a1648d54a69838bb79c81f9072d23717330a60f1d97"} Feb 16 02:09:34.314257 master-0 kubenswrapper[7721]: I0216 
02:09:34.314221 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"faf5128620c105dbf4c0b83460e5c6d63ea7e16d1417f90a62c09817a9c5e166"} Feb 16 02:09:34.362908 master-0 kubenswrapper[7721]: I0216 02:09:34.362816 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" podStartSLOduration=107.362796707 podStartE2EDuration="1m47.362796707s" podCreationTimestamp="2026-02-16 02:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:09:34.361700099 +0000 UTC m=+177.855934391" watchObservedRunningTime="2026-02-16 02:09:34.362796707 +0000 UTC m=+177.857030969" Feb 16 02:09:34.735009 master-0 kubenswrapper[7721]: I0216 02:09:34.734915 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0615fd34-eaf9-4a3a-8543-25a7a5747194" path="/var/lib/kubelet/pods/0615fd34-eaf9-4a3a-8543-25a7a5747194/volumes" Feb 16 02:09:34.736144 master-0 kubenswrapper[7721]: I0216 02:09:34.736090 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eda8a42-765e-47cf-896f-324e8185062e" path="/var/lib/kubelet/pods/7eda8a42-765e-47cf-896f-324e8185062e/volumes" Feb 16 02:09:34.737156 master-0 kubenswrapper[7721]: I0216 02:09:34.737121 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" path="/var/lib/kubelet/pods/d8b1d77b-0955-44f3-a780-e8b6813aff0b/volumes" Feb 16 02:09:34.739143 master-0 kubenswrapper[7721]: I0216 02:09:34.739101 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a7e185-78f4-4d69-b126-d465374a6218" path="/var/lib/kubelet/pods/f2a7e185-78f4-4d69-b126-d465374a6218/volumes" Feb 16 02:09:35.287402 master-0 
kubenswrapper[7721]: E0216 02:09:35.287254 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-trqk8.189497fed508d26e openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-trqk8,UID:ab463f74-d1e7-44f1-9634-d9f63685b06d,APIVersion:v1,ResourceVersion:7445,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:07:54.215060078 +0000 UTC m=+77.709294340,LastTimestamp:2026-02-16 02:07:54.215060078 +0000 UTC m=+77.709294340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:09:35.315876 master-0 kubenswrapper[7721]: I0216 02:09:35.315827 7721 patch_prober.go:28] interesting pod/packageserver-87777c9b7-fxzh6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 02:09:35.316034 master-0 kubenswrapper[7721]: I0216 02:09:35.315893 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podUID="dc3354cb-b6c3-40a5-a695-cccb079ad292" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.65:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 02:09:36.320608 master-0 kubenswrapper[7721]: I0216 02:09:36.320523 7721 patch_prober.go:28] interesting 
pod/packageserver-87777c9b7-fxzh6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 02:09:36.321204 master-0 kubenswrapper[7721]: I0216 02:09:36.320611 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podUID="dc3354cb-b6c3-40a5-a695-cccb079ad292" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 02:09:36.478645 master-0 kubenswrapper[7721]: I0216 02:09:36.478461 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 16 02:09:36.478645 master-0 kubenswrapper[7721]: I0216 02:09:36.478629 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 02:09:36.501606 master-0 kubenswrapper[7721]: I0216 02:09:36.501575 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 02:09:37.335714 master-0 kubenswrapper[7721]: I0216 02:09:37.335614 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" event={"ID":"48863ff6-63ac-42d7-bac7-29d888c92db9","Type":"ContainerStarted","Data":"1169ba7d80653acfb978496c38f306905e7dc8028752f494ebda1e9356b7b0b5"} Feb 16 02:09:37.341657 master-0 kubenswrapper[7721]: I0216 02:09:37.341608 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" 
event={"ID":"0abea413-e08a-465a-8ec4-2be650bfd5bd","Type":"ContainerStarted","Data":"afb96953f5c3cd6aee2ad5eefab84419a5d156247d7b12ea9a161e50afc4ca77"} Feb 16 02:09:37.345220 master-0 kubenswrapper[7721]: I0216 02:09:37.345168 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" event={"ID":"27a42eb0-677c-414d-b0ec-f945ec39b7e9","Type":"ContainerStarted","Data":"267696fb38babb20fc395a54079b38f152fefbd86e2ea03cbc3e7911ddb32292"} Feb 16 02:09:37.345322 master-0 kubenswrapper[7721]: I0216 02:09:37.345228 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" event={"ID":"27a42eb0-677c-414d-b0ec-f945ec39b7e9","Type":"ContainerStarted","Data":"88fcce026048d13fa9f5a17c335729461124a54c88a8e317918ea36be6c9ba26"} Feb 16 02:09:37.347378 master-0 kubenswrapper[7721]: I0216 02:09:37.347328 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" event={"ID":"a77e2f8f-d164-4a58-aab2-f3444c05cacb","Type":"ContainerStarted","Data":"992140dbf9ae65014df74f84c27ac943b6aa3fa48ebab2a299f13ea17d92ff73"} Feb 16 02:09:37.370367 master-0 kubenswrapper[7721]: I0216 02:09:37.370247 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" podStartSLOduration=108.813102972 podStartE2EDuration="1m52.370218091s" podCreationTimestamp="2026-02-16 02:07:45 +0000 UTC" firstStartedPulling="2026-02-16 02:09:33.056817601 +0000 UTC m=+176.551051863" lastFinishedPulling="2026-02-16 02:09:36.61393272 +0000 UTC m=+180.108166982" observedRunningTime="2026-02-16 02:09:37.362802327 +0000 UTC m=+180.857036649" watchObservedRunningTime="2026-02-16 02:09:37.370218091 +0000 UTC m=+180.864452383" Feb 16 02:09:37.393910 master-0 kubenswrapper[7721]: I0216 02:09:37.393794 7721 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" podStartSLOduration=107.370471437 podStartE2EDuration="1m51.393750016s" podCreationTimestamp="2026-02-16 02:07:46 +0000 UTC" firstStartedPulling="2026-02-16 02:09:32.588730033 +0000 UTC m=+176.082964325" lastFinishedPulling="2026-02-16 02:09:36.612008602 +0000 UTC m=+180.106242904" observedRunningTime="2026-02-16 02:09:37.387423178 +0000 UTC m=+180.881657450" watchObservedRunningTime="2026-02-16 02:09:37.393750016 +0000 UTC m=+180.887984288" Feb 16 02:09:37.416521 master-0 kubenswrapper[7721]: I0216 02:09:37.415244 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" podStartSLOduration=109.381723616 podStartE2EDuration="1m53.415216388s" podCreationTimestamp="2026-02-16 02:07:44 +0000 UTC" firstStartedPulling="2026-02-16 02:09:32.569782033 +0000 UTC m=+176.064016315" lastFinishedPulling="2026-02-16 02:09:36.603274825 +0000 UTC m=+180.097509087" observedRunningTime="2026-02-16 02:09:37.410516992 +0000 UTC m=+180.904751264" watchObservedRunningTime="2026-02-16 02:09:37.415216388 +0000 UTC m=+180.909450660" Feb 16 02:09:37.455693 master-0 kubenswrapper[7721]: I0216 02:09:37.455556 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" podStartSLOduration=107.397150659 podStartE2EDuration="1m51.455526279s" podCreationTimestamp="2026-02-16 02:07:46 +0000 UTC" firstStartedPulling="2026-02-16 02:09:32.565103107 +0000 UTC m=+176.059337409" lastFinishedPulling="2026-02-16 02:09:36.623478767 +0000 UTC m=+180.117713029" observedRunningTime="2026-02-16 02:09:37.451341095 +0000 UTC m=+180.945575367" watchObservedRunningTime="2026-02-16 02:09:37.455526279 +0000 UTC m=+180.949760581" Feb 16 02:09:38.923015 master-0 kubenswrapper[7721]: E0216 02:09:38.922882 7721 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:09:40.135288 master-0 kubenswrapper[7721]: I0216 02:09:40.135222 7721 patch_prober.go:28] interesting pod/packageserver-87777c9b7-fxzh6 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:09:40.135934 master-0 kubenswrapper[7721]: I0216 02:09:40.135300 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podUID="dc3354cb-b6c3-40a5-a695-cccb079ad292" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:09:40.135934 master-0 kubenswrapper[7721]: I0216 02:09:40.135407 7721 patch_prober.go:28] interesting pod/packageserver-87777c9b7-fxzh6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:09:40.135934 master-0 kubenswrapper[7721]: I0216 02:09:40.135551 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podUID="dc3354cb-b6c3-40a5-a695-cccb079ad292" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:09:40.826025 master-0 kubenswrapper[7721]: I0216 02:09:40.825936 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:09:41.299023 master-0 kubenswrapper[7721]: E0216 02:09:41.298919 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Feb 16 02:09:41.496283 master-0 kubenswrapper[7721]: I0216 02:09:41.496200 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 16 02:09:41.843061 master-0 kubenswrapper[7721]: I0216 02:09:41.843007 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:09:42.382474 master-0 kubenswrapper[7721]: I0216 02:09:42.382143 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" event={"ID":"fec84b8a-a0d1-4b07-8827-cef0beb89ecd","Type":"ContainerStarted","Data":"54e4f3bd63acfa80c546903eac7441d247818158a150a69ea32c8395383dd3ba"}
Feb 16 02:09:42.383982 master-0 kubenswrapper[7721]: I0216 02:09:42.383897 7721 generic.go:334] "Generic (PLEG): container finished" podID="bde83629-b39c-401e-bc30-5ce205638918" containerID="4d828055a40abd365d5f9304f3bb2e1eea303420e0dae2b1729b6e96c17c65b6" exitCode=0
Feb 16 02:09:42.384108 master-0 kubenswrapper[7721]: I0216 02:09:42.384021 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" event={"ID":"bde83629-b39c-401e-bc30-5ce205638918","Type":"ContainerDied","Data":"4d828055a40abd365d5f9304f3bb2e1eea303420e0dae2b1729b6e96c17c65b6"}
Feb 16 02:09:42.386026 master-0 kubenswrapper[7721]: I0216 02:09:42.385960 7721 scope.go:117] "RemoveContainer" containerID="4d828055a40abd365d5f9304f3bb2e1eea303420e0dae2b1729b6e96c17c65b6"
Feb 16 02:09:42.386213 master-0 kubenswrapper[7721]: I0216 02:09:42.386147 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" event={"ID":"30fef0d5-46ea-4fa3-9ffa-88187d010ffe","Type":"ContainerStarted","Data":"4f4386d569551a2cb1add9279ae5e39db1d0c3382f70cefdecbf2167f005bf64"}
Feb 16 02:09:42.388719 master-0 kubenswrapper[7721]: I0216 02:09:42.388614 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/0.log"
Feb 16 02:09:42.388719 master-0 kubenswrapper[7721]: I0216 02:09:42.388652 7721 generic.go:334] "Generic (PLEG): container finished" podID="a3065737-c7c0-4fbb-b484-f2a9204d4908" containerID="37393f3209e22fdba80463ac1612aee9793e0477a277020982d8df5dfbf209db" exitCode=1
Feb 16 02:09:42.388719 master-0 kubenswrapper[7721]: I0216 02:09:42.388677 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerDied","Data":"37393f3209e22fdba80463ac1612aee9793e0477a277020982d8df5dfbf209db"}
Feb 16 02:09:42.388975 master-0 kubenswrapper[7721]: I0216 02:09:42.388929 7721 scope.go:117] "RemoveContainer" containerID="37393f3209e22fdba80463ac1612aee9793e0477a277020982d8df5dfbf209db"
Feb 16 02:09:42.419766 master-0 kubenswrapper[7721]: I0216 02:09:42.419653 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" podStartSLOduration=105.444891474 podStartE2EDuration="1m54.419626689s" podCreationTimestamp="2026-02-16 02:07:48 +0000 UTC" firstStartedPulling="2026-02-16
02:09:33.035555974 +0000 UTC m=+176.529790236" lastFinishedPulling="2026-02-16 02:09:42.010291149 +0000 UTC m=+185.504525451" observedRunningTime="2026-02-16 02:09:42.418894541 +0000 UTC m=+185.913128833" watchObservedRunningTime="2026-02-16 02:09:42.419626689 +0000 UTC m=+185.913860981"
Feb 16 02:09:42.451341 master-0 kubenswrapper[7721]: I0216 02:09:42.451239 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" podStartSLOduration=111.549024468 podStartE2EDuration="2m0.451218673s" podCreationTimestamp="2026-02-16 02:07:42 +0000 UTC" firstStartedPulling="2026-02-16 02:09:33.138648392 +0000 UTC m=+176.632882644" lastFinishedPulling="2026-02-16 02:09:42.040842577 +0000 UTC m=+185.535076849" observedRunningTime="2026-02-16 02:09:42.448884515 +0000 UTC m=+185.943118807" watchObservedRunningTime="2026-02-16 02:09:42.451218673 +0000 UTC m=+185.945452965"
Feb 16 02:09:43.400076 master-0 kubenswrapper[7721]: I0216 02:09:43.399987 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/0.log"
Feb 16 02:09:43.400957 master-0 kubenswrapper[7721]: I0216 02:09:43.400200 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerStarted","Data":"913a5d08144597878e120125e796f5de9a81becfa80d2751d362c9b509551b8f"}
Feb 16 02:09:43.404834 master-0 kubenswrapper[7721]: I0216 02:09:43.404743 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" event={"ID":"bde83629-b39c-401e-bc30-5ce205638918","Type":"ContainerStarted","Data":"878600941ff09ae766ef1ccc9a324f0c6d5cbe6f0b05660545fe5e976ad49b02"}
Feb 16 02:09:43.405559 master-0 kubenswrapper[7721]: I0216 02:09:43.405485 7721 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" containerID="cri-o://4d828055a40abd365d5f9304f3bb2e1eea303420e0dae2b1729b6e96c17c65b6"
Feb 16 02:09:43.405559 master-0 kubenswrapper[7721]: I0216 02:09:43.405542 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:09:44.411382 master-0 kubenswrapper[7721]: I0216 02:09:44.411268 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:09:44.413481 master-0 kubenswrapper[7721]: I0216 02:09:44.413403 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:09:44.843112 master-0 kubenswrapper[7721]: I0216 02:09:44.842976 7721 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:09:45.585348 master-0 kubenswrapper[7721]: E0216 02:09:45.585227 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 16 02:09:47.437089 master-0 kubenswrapper[7721]: I0216 02:09:47.436837 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-zc2br_857357a1-dc98-4dd5-98b3-c94b1ddf9dec/manager/0.log"
Feb 16 02:09:47.438062 master-0 kubenswrapper[7721]: I0216 02:09:47.437500 7721 generic.go:334] "Generic (PLEG): container finished" podID="857357a1-dc98-4dd5-98b3-c94b1ddf9dec" containerID="5636e1e80751f3a3c96789a21a3143daf15c7ab0cfa132d87dcb28a679f13f01" exitCode=1
Feb 16 02:09:47.438062 master-0 kubenswrapper[7721]: I0216 02:09:47.437602 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" event={"ID":"857357a1-dc98-4dd5-98b3-c94b1ddf9dec","Type":"ContainerDied","Data":"5636e1e80751f3a3c96789a21a3143daf15c7ab0cfa132d87dcb28a679f13f01"}
Feb 16 02:09:47.438768 master-0 kubenswrapper[7721]: I0216 02:09:47.438722 7721 scope.go:117] "RemoveContainer" containerID="5636e1e80751f3a3c96789a21a3143daf15c7ab0cfa132d87dcb28a679f13f01"
Feb 16 02:09:47.441375 master-0 kubenswrapper[7721]: I0216 02:09:47.441312 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-g9lcm_27d876a7-6a48-4942-ad96-ed8ed3aa104b/manager/0.log"
Feb 16 02:09:47.441711 master-0 kubenswrapper[7721]: I0216 02:09:47.441388 7721 generic.go:334] "Generic (PLEG): container finished" podID="27d876a7-6a48-4942-ad96-ed8ed3aa104b" containerID="940771c91c013a004b3132c01c764c048ed22316fa2e21d7b58deed65f3ed4cf" exitCode=1
Feb 16 02:09:47.441711 master-0 kubenswrapper[7721]: I0216 02:09:47.441461 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" event={"ID":"27d876a7-6a48-4942-ad96-ed8ed3aa104b","Type":"ContainerDied","Data":"940771c91c013a004b3132c01c764c048ed22316fa2e21d7b58deed65f3ed4cf"}
Feb 16 02:09:47.441990 master-0 kubenswrapper[7721]: I0216 02:09:47.441944 7721 scope.go:117] "RemoveContainer" containerID="940771c91c013a004b3132c01c764c048ed22316fa2e21d7b58deed65f3ed4cf"
Feb 16 02:09:48.455772 master-0 kubenswrapper[7721]: I0216 02:09:48.455673 7721 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-zc2br_857357a1-dc98-4dd5-98b3-c94b1ddf9dec/manager/0.log"
Feb 16 02:09:48.456591 master-0 kubenswrapper[7721]: I0216 02:09:48.456354 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" event={"ID":"857357a1-dc98-4dd5-98b3-c94b1ddf9dec","Type":"ContainerStarted","Data":"0f7ba85e2cd54b9d76dada87f2712d689c26beb2b9f0778369e602e1815aefe6"}
Feb 16 02:09:48.456886 master-0 kubenswrapper[7721]: I0216 02:09:48.456821 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:09:48.460151 master-0 kubenswrapper[7721]: I0216 02:09:48.460076 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-g9lcm_27d876a7-6a48-4942-ad96-ed8ed3aa104b/manager/0.log"
Feb 16 02:09:48.460314 master-0 kubenswrapper[7721]: I0216 02:09:48.460165 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" event={"ID":"27d876a7-6a48-4942-ad96-ed8ed3aa104b","Type":"ContainerStarted","Data":"bb54fbd185420265f2400dbce5bb93b2c07ec50f3a0611291aab6640cc25bca3"}
Feb 16 02:09:48.461272 master-0 kubenswrapper[7721]: I0216 02:09:48.461209 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:09:49.827191 master-0 kubenswrapper[7721]: I0216 02:09:49.827109 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 16 02:09:50.135602 master-0 kubenswrapper[7721]: I0216 02:09:50.135408 7721 patch_prober.go:28] interesting pod/packageserver-87777c9b7-fxzh6 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:09:50.135602 master-0 kubenswrapper[7721]: I0216 02:09:50.135486 7721 patch_prober.go:28] interesting pod/packageserver-87777c9b7-fxzh6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:09:50.135904 master-0 kubenswrapper[7721]: I0216 02:09:50.135587 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podUID="dc3354cb-b6c3-40a5-a695-cccb079ad292" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:09:50.135904 master-0 kubenswrapper[7721]: I0216 02:09:50.135513 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" podUID="dc3354cb-b6c3-40a5-a695-cccb079ad292" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.65:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:09:51.487882 master-0 kubenswrapper[7721]: I0216 02:09:51.487786 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-b47jp_6c02961f-30ec-4405-b7fa-9c4192342ae9/openshift-controller-manager-operator/1.log"
Feb 16 02:09:51.489255 master-0 kubenswrapper[7721]: I0216 02:09:51.489189 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-b47jp_6c02961f-30ec-4405-b7fa-9c4192342ae9/openshift-controller-manager-operator/0.log"
Feb 16 02:09:51.489355 master-0 kubenswrapper[7721]: I0216 02:09:51.489280 7721 generic.go:334] "Generic (PLEG): container finished" podID="6c02961f-30ec-4405-b7fa-9c4192342ae9" containerID="907bfaa35e251ac0a99127e043064ef8d7828048025a8b998d4e1bd9a8208385" exitCode=255
Feb 16 02:09:51.489355 master-0 kubenswrapper[7721]: I0216 02:09:51.489330 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" event={"ID":"6c02961f-30ec-4405-b7fa-9c4192342ae9","Type":"ContainerDied","Data":"907bfaa35e251ac0a99127e043064ef8d7828048025a8b998d4e1bd9a8208385"}
Feb 16 02:09:51.489548 master-0 kubenswrapper[7721]: I0216 02:09:51.489381 7721 scope.go:117] "RemoveContainer" containerID="4af3b63baf882cf9b9d02a791b48a2c854ad5ccd1fbd43903fb2e66b8e587e95"
Feb 16 02:09:51.490331 master-0 kubenswrapper[7721]: I0216 02:09:51.490259 7721 scope.go:117] "RemoveContainer" containerID="907bfaa35e251ac0a99127e043064ef8d7828048025a8b998d4e1bd9a8208385"
Feb 16 02:09:51.490776 master-0 kubenswrapper[7721]: E0216 02:09:51.490697 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-b47jp_openshift-controller-manager-operator(6c02961f-30ec-4405-b7fa-9c4192342ae9)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" podUID="6c02961f-30ec-4405-b7fa-9c4192342ae9"
Feb 16 02:09:51.557473 master-0 kubenswrapper[7721]: I0216 02:09:51.556033 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-etcd/etcd-master-0" podStartSLOduration=2.556005207 podStartE2EDuration="2.556005207s" podCreationTimestamp="2026-02-16 02:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:09:51.545811304 +0000 UTC m=+195.040045606" watchObservedRunningTime="2026-02-16 02:09:51.556005207 +0000 UTC m=+195.050239509"
Feb 16 02:09:51.849501 master-0 kubenswrapper[7721]: I0216 02:09:51.849316 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:09:51.857345 master-0 kubenswrapper[7721]: I0216 02:09:51.857282 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 02:09:52.500583 master-0 kubenswrapper[7721]: I0216 02:09:52.500378 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-b47jp_6c02961f-30ec-4405-b7fa-9c4192342ae9/openshift-controller-manager-operator/1.log"
Feb 16 02:09:56.846138 master-0 kubenswrapper[7721]: I0216 02:09:56.846005 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:09:56.964644 master-0 kubenswrapper[7721]: I0216 02:09:56.964571 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:09:58.020179 master-0 kubenswrapper[7721]: I0216 02:09:58.020085 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5788fc6459-29m25"]
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: E0216 02:09:58.020523 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" containerName="fix-audit-permissions"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020547 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" containerName="fix-audit-permissions"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: E0216 02:09:58.020568 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020581 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: E0216 02:09:58.020607 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f43f07-ce08-4c21-9463-ea983a110244" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020620 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f43f07-ce08-4c21-9463-ea983a110244" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: E0216 02:09:58.020645 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" containerName="oauth-apiserver"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020658 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" containerName="oauth-apiserver"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: E0216 02:09:58.020686 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4733c2df-0f5a-4696-b8c6-2568ebc7debc" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020698 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4733c2df-0f5a-4696-b8c6-2568ebc7debc" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: E0216 02:09:58.020725 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0615fd34-eaf9-4a3a-8543-25a7a5747194" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020740 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0615fd34-eaf9-4a3a-8543-25a7a5747194" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020896 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020916 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0615fd34-eaf9-4a3a-8543-25a7a5747194" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020939 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4733c2df-0f5a-4696-b8c6-2568ebc7debc" containerName="installer"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020959 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b1d77b-0955-44f3-a780-e8b6813aff0b" containerName="oauth-apiserver"
Feb 16 02:09:58.021342 master-0 kubenswrapper[7721]: I0216 02:09:58.020976 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f43f07-ce08-4c21-9463-ea983a110244" containerName="installer"
Feb 16 02:09:58.022743 master-0 kubenswrapper[7721]: I0216 02:09:58.021624 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:09:58.025593 master-0 kubenswrapper[7721]: I0216 02:09:58.025526 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 02:09:58.026134 master-0 kubenswrapper[7721]: I0216 02:09:58.026049 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 02:09:58.026552 master-0 kubenswrapper[7721]: I0216 02:09:58.026500 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 02:09:58.027363 master-0 kubenswrapper[7721]: I0216 02:09:58.027308 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 02:09:58.028770 master-0 kubenswrapper[7721]: I0216 02:09:58.028715 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-8pgh8"
Feb 16 02:09:58.029318 master-0 kubenswrapper[7721]: I0216 02:09:58.029266 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"]
Feb 16 02:09:58.030523 master-0 kubenswrapper[7721]: I0216 02:09:58.030471 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:09:58.030650 master-0 kubenswrapper[7721]: I0216 02:09:58.030603 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 02:09:58.036094 master-0 kubenswrapper[7721]: I0216 02:09:58.036006 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"]
Feb 16 02:09:58.036321 master-0 kubenswrapper[7721]: I0216 02:09:58.036255 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 02:09:58.037530 master-0 kubenswrapper[7721]: I0216 02:09:58.037492 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.040605 master-0 kubenswrapper[7721]: I0216 02:09:58.040535 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 02:09:58.040879 master-0 kubenswrapper[7721]: I0216 02:09:58.040838 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 02:09:58.041132 master-0 kubenswrapper[7721]: I0216 02:09:58.041086 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 02:09:58.042385 master-0 kubenswrapper[7721]: I0216 02:09:58.042331 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 02:09:58.042717 master-0 kubenswrapper[7721]: I0216 02:09:58.042663 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 02:09:58.045108 master-0 kubenswrapper[7721]: I0216 02:09:58.045048 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 02:09:58.045108 master-0 kubenswrapper[7721]: I0216 02:09:58.045095 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 16 02:09:58.045322 master-0 kubenswrapper[7721]: I0216 02:09:58.045208 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 02:09:58.045615 master-0 kubenswrapper[7721]: I0216 02:09:58.045577 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 02:09:58.045615 master-0 kubenswrapper[7721]: I0216 02:09:58.045607 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 02:09:58.045779 master-0 kubenswrapper[7721]: I0216 02:09:58.045662 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 02:09:58.045941 master-0 kubenswrapper[7721]: I0216 02:09:58.045893 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 02:09:58.046405 master-0 kubenswrapper[7721]: I0216 02:09:58.046192 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 02:09:58.056982 master-0 kubenswrapper[7721]: I0216 02:09:58.056390 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5788fc6459-29m25"]
Feb 16 02:09:58.060466 master-0 kubenswrapper[7721]: I0216 02:09:58.060375 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"]
Feb 16 02:09:58.072585 master-0 kubenswrapper[7721]: I0216 02:09:58.072464 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"]
Feb 16 02:09:58.100873 master-0 kubenswrapper[7721]: I0216 02:09:58.100794 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n58j\" (UniqueName: \"kubernetes.io/projected/22739961-e322-47f1-b232-eaa4cc35319c-kube-api-access-9n58j\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.101098 master-0 kubenswrapper[7721]: I0216 02:09:58.100891 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-trusted-ca-bundle\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.101098 master-0 kubenswrapper[7721]: I0216 02:09:58.101049 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-audit-policies\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.101230 master-0 kubenswrapper[7721]: I0216 02:09:58.101103 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-serving-cert\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.101230 master-0 kubenswrapper[7721]: I0216 02:09:58.101206 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22739961-e322-47f1-b232-eaa4cc35319c-audit-dir\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.101352 master-0 kubenswrapper[7721]: I0216 02:09:58.101267 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-etcd-serving-ca\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.101352 master-0 kubenswrapper[7721]: I0216 02:09:58.101336 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-etcd-client\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.101601 master-0 kubenswrapper[7721]: I0216 02:09:58.101540 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-encryption-config\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204123 master-0 kubenswrapper[7721]: I0216 02:09:58.204051 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:09:58.204123 master-0 kubenswrapper[7721]: I0216 02:09:58.204102 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:09:58.204123 master-0 kubenswrapper[7721]: I0216 02:09:58.204127 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n58j\" (UniqueName: \"kubernetes.io/projected/22739961-e322-47f1-b232-eaa4cc35319c-kube-api-access-9n58j\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204578 master-0 kubenswrapper[7721]: I0216 02:09:58.204146 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-trusted-ca-bundle\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204578 master-0 kubenswrapper[7721]: I0216 02:09:58.204343 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nmjz\" (UniqueName: \"kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:09:58.204704 master-0 kubenswrapper[7721]: I0216 02:09:58.204591 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:09:58.204704 master-0 kubenswrapper[7721]: I0216 02:09:58.204645 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-audit-policies\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204704 master-0 kubenswrapper[7721]: I0216 02:09:58.204677 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-trusted-ca-bundle\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204704 master-0 kubenswrapper[7721]: I0216 02:09:58.204682 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-serving-cert\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204727 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4p8p\" (UniqueName: \"kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204749 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204773 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22739961-e322-47f1-b232-eaa4cc35319c-audit-dir\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204794 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-etcd-serving-ca\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204814 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-etcd-client\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204832 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204859 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204879 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204872 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22739961-e322-47f1-b232-eaa4cc35319c-audit-dir\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.204950 master-0 kubenswrapper[7721]: I0216 02:09:58.204903 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-encryption-config\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:09:58.206163 master-0 kubenswrapper[7721]: I0216 02:09:58.206096 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName:
\"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-etcd-serving-ca\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:09:58.207467 master-0 kubenswrapper[7721]: I0216 02:09:58.207060 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-audit-policies\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:09:58.210612 master-0 kubenswrapper[7721]: I0216 02:09:58.209769 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-encryption-config\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:09:58.213271 master-0 kubenswrapper[7721]: I0216 02:09:58.213210 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-etcd-client\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:09:58.213271 master-0 kubenswrapper[7721]: I0216 02:09:58.213261 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-serving-cert\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:09:58.228358 master-0 kubenswrapper[7721]: I0216 02:09:58.228310 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9n58j\" (UniqueName: \"kubernetes.io/projected/22739961-e322-47f1-b232-eaa4cc35319c-kube-api-access-9n58j\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:09:58.305496 master-0 kubenswrapper[7721]: I0216 02:09:58.305317 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4p8p\" (UniqueName: \"kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.305496 master-0 kubenswrapper[7721]: I0216 02:09:58.305371 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.305855 master-0 kubenswrapper[7721]: I0216 02:09:58.305675 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.305855 master-0 kubenswrapper[7721]: I0216 02:09:58.305807 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " 
pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.305998 master-0 kubenswrapper[7721]: I0216 02:09:58.305866 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.306182 master-0 kubenswrapper[7721]: I0216 02:09:58.306137 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.306274 master-0 kubenswrapper[7721]: I0216 02:09:58.306200 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.306343 master-0 kubenswrapper[7721]: I0216 02:09:58.306255 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nmjz\" (UniqueName: \"kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.306405 master-0 kubenswrapper[7721]: I0216 02:09:58.306342 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.308486 master-0 kubenswrapper[7721]: I0216 02:09:58.307416 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.308486 master-0 kubenswrapper[7721]: I0216 02:09:58.308249 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.308709 master-0 kubenswrapper[7721]: I0216 02:09:58.308505 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.309924 master-0 kubenswrapper[7721]: I0216 02:09:58.309881 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.312334 master-0 
kubenswrapper[7721]: I0216 02:09:58.312265 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.312505 master-0 kubenswrapper[7721]: I0216 02:09:58.312287 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.314269 master-0 kubenswrapper[7721]: I0216 02:09:58.314193 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.336497 master-0 kubenswrapper[7721]: I0216 02:09:58.332528 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4p8p\" (UniqueName: \"kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.336497 master-0 kubenswrapper[7721]: I0216 02:09:58.335428 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nmjz\" (UniqueName: \"kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz\") pod 
\"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.372154 master-0 kubenswrapper[7721]: I0216 02:09:58.372053 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:58.398744 master-0 kubenswrapper[7721]: I0216 02:09:58.398662 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:58.419428 master-0 kubenswrapper[7721]: I0216 02:09:58.419343 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:09:58.921009 master-0 kubenswrapper[7721]: I0216 02:09:58.920946 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5788fc6459-29m25"] Feb 16 02:09:58.926529 master-0 kubenswrapper[7721]: W0216 02:09:58.925635 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode491b5ed_9c09_4308_9843_fba8d43bd3ae.slice/crio-debb4f03d3db8741a1ba37d50a33cf649d64e1d2b1233200aa072be50cd42b72 WatchSource:0}: Error finding container debb4f03d3db8741a1ba37d50a33cf649d64e1d2b1233200aa072be50cd42b72: Status 404 returned error can't find the container with id debb4f03d3db8741a1ba37d50a33cf649d64e1d2b1233200aa072be50cd42b72 Feb 16 02:09:58.984923 master-0 kubenswrapper[7721]: I0216 02:09:58.984873 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"] Feb 16 02:09:59.005824 master-0 kubenswrapper[7721]: W0216 02:09:59.005792 7721 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83883885_f493_4559_9c0f_e28d69712475.slice/crio-e55ecc9e109900a7552879fd5133496a07323a5858e7c3c83ce557b826a22cc6 WatchSource:0}: Error finding container e55ecc9e109900a7552879fd5133496a07323a5858e7c3c83ce557b826a22cc6: Status 404 returned error can't find the container with id e55ecc9e109900a7552879fd5133496a07323a5858e7c3c83ce557b826a22cc6 Feb 16 02:09:59.061288 master-0 kubenswrapper[7721]: I0216 02:09:59.061216 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"] Feb 16 02:09:59.069555 master-0 kubenswrapper[7721]: W0216 02:09:59.069499 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22739961_e322_47f1_b232_eaa4cc35319c.slice/crio-4fa491ec351633ec2d1e1b13971562156d70b9f7ae47702242163650f593c658 WatchSource:0}: Error finding container 4fa491ec351633ec2d1e1b13971562156d70b9f7ae47702242163650f593c658: Status 404 returned error can't find the container with id 4fa491ec351633ec2d1e1b13971562156d70b9f7ae47702242163650f593c658 Feb 16 02:09:59.142482 master-0 kubenswrapper[7721]: I0216 02:09:59.142386 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:09:59.554956 master-0 kubenswrapper[7721]: I0216 02:09:59.554706 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" event={"ID":"83883885-f493-4559-9c0f-e28d69712475","Type":"ContainerStarted","Data":"2a160f2c1742de9f4ba99becfe5db3107e11e652c40ff70cb3349e1627a9a147"} Feb 16 02:09:59.554956 master-0 kubenswrapper[7721]: I0216 02:09:59.554759 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" 
event={"ID":"83883885-f493-4559-9c0f-e28d69712475","Type":"ContainerStarted","Data":"e55ecc9e109900a7552879fd5133496a07323a5858e7c3c83ce557b826a22cc6"} Feb 16 02:09:59.555201 master-0 kubenswrapper[7721]: I0216 02:09:59.555033 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:09:59.556847 master-0 kubenswrapper[7721]: I0216 02:09:59.556812 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" event={"ID":"e491b5ed-9c09-4308-9843-fba8d43bd3ae","Type":"ContainerStarted","Data":"fb538a41cea5f683a2ab8b99be06eb74affc528bd353ff7cabad5516264bee81"} Feb 16 02:09:59.556928 master-0 kubenswrapper[7721]: I0216 02:09:59.556853 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" event={"ID":"e491b5ed-9c09-4308-9843-fba8d43bd3ae","Type":"ContainerStarted","Data":"debb4f03d3db8741a1ba37d50a33cf649d64e1d2b1233200aa072be50cd42b72"} Feb 16 02:09:59.557201 master-0 kubenswrapper[7721]: I0216 02:09:59.557118 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:59.559139 master-0 kubenswrapper[7721]: I0216 02:09:59.559099 7721 generic.go:334] "Generic (PLEG): container finished" podID="22739961-e322-47f1-b232-eaa4cc35319c" containerID="65a84e4599891e5bba954abf5a8b237b752aec5f97fd2072d6a6720ec3678bea" exitCode=0 Feb 16 02:09:59.559195 master-0 kubenswrapper[7721]: I0216 02:09:59.559140 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" event={"ID":"22739961-e322-47f1-b232-eaa4cc35319c","Type":"ContainerDied","Data":"65a84e4599891e5bba954abf5a8b237b752aec5f97fd2072d6a6720ec3678bea"} Feb 16 02:09:59.559195 master-0 kubenswrapper[7721]: I0216 02:09:59.559160 7721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" event={"ID":"22739961-e322-47f1-b232-eaa4cc35319c","Type":"ContainerStarted","Data":"4fa491ec351633ec2d1e1b13971562156d70b9f7ae47702242163650f593c658"} Feb 16 02:09:59.565191 master-0 kubenswrapper[7721]: I0216 02:09:59.565154 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:09:59.575832 master-0 kubenswrapper[7721]: I0216 02:09:59.575774 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" podStartSLOduration=133.575760949 podStartE2EDuration="2m13.575760949s" podCreationTimestamp="2026-02-16 02:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:09:59.574898157 +0000 UTC m=+203.069132429" watchObservedRunningTime="2026-02-16 02:09:59.575760949 +0000 UTC m=+203.069995211" Feb 16 02:09:59.597454 master-0 kubenswrapper[7721]: I0216 02:09:59.595572 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" podStartSLOduration=133.59555204 podStartE2EDuration="2m13.59555204s" podCreationTimestamp="2026-02-16 02:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:09:59.594131085 +0000 UTC m=+203.088365367" watchObservedRunningTime="2026-02-16 02:09:59.59555204 +0000 UTC m=+203.089786302" Feb 16 02:09:59.687919 master-0 kubenswrapper[7721]: I0216 02:09:59.687866 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:10:00.571559 master-0 kubenswrapper[7721]: 
I0216 02:10:00.571479 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" event={"ID":"22739961-e322-47f1-b232-eaa4cc35319c","Type":"ContainerStarted","Data":"63d14fb00b9d51bf6fb3887d2b1599dab1216d665bfe55b202a6a7f7c0d37e1e"} Feb 16 02:10:00.782871 master-0 kubenswrapper[7721]: I0216 02:10:00.782710 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" podStartSLOduration=168.782672675 podStartE2EDuration="2m48.782672675s" podCreationTimestamp="2026-02-16 02:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:00.617050254 +0000 UTC m=+204.111284526" watchObservedRunningTime="2026-02-16 02:10:00.782672675 +0000 UTC m=+204.276906977" Feb 16 02:10:00.787107 master-0 kubenswrapper[7721]: I0216 02:10:00.787026 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kb98"] Feb 16 02:10:00.787600 master-0 kubenswrapper[7721]: I0216 02:10:00.787537 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9kb98" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="registry-server" containerID="cri-o://60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224" gracePeriod=2 Feb 16 02:10:00.794558 master-0 kubenswrapper[7721]: I0216 02:10:00.794485 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkp55"] Feb 16 02:10:00.794988 master-0 kubenswrapper[7721]: I0216 02:10:00.794923 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vkp55" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="registry-server" 
containerID="cri-o://d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373" gracePeriod=2 Feb 16 02:10:00.825099 master-0 kubenswrapper[7721]: I0216 02:10:00.824942 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gkbtj"] Feb 16 02:10:00.826817 master-0 kubenswrapper[7721]: I0216 02:10:00.826770 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:00.838870 master-0 kubenswrapper[7721]: I0216 02:10:00.838817 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mhjgf" Feb 16 02:10:00.840275 master-0 kubenswrapper[7721]: I0216 02:10:00.840223 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9c6g5"] Feb 16 02:10:00.846013 master-0 kubenswrapper[7721]: I0216 02:10:00.845950 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:00.888944 master-0 kubenswrapper[7721]: I0216 02:10:00.852991 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-tmpwz" Feb 16 02:10:00.888944 master-0 kubenswrapper[7721]: I0216 02:10:00.861807 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkbtj"] Feb 16 02:10:00.888944 master-0 kubenswrapper[7721]: I0216 02:10:00.868061 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9c6g5"] Feb 16 02:10:01.016043 master-0 kubenswrapper[7721]: I0216 02:10:01.015114 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-utilities\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " 
pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.016043 master-0 kubenswrapper[7721]: I0216 02:10:01.015185 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-utilities\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.016043 master-0 kubenswrapper[7721]: I0216 02:10:01.015214 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-catalog-content\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.016043 master-0 kubenswrapper[7721]: I0216 02:10:01.015287 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzpb\" (UniqueName: \"kubernetes.io/projected/89041b37-18f6-499d-89ec-a0523a25dc58-kube-api-access-zvzpb\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.016043 master-0 kubenswrapper[7721]: I0216 02:10:01.015343 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cstlg\" (UniqueName: \"kubernetes.io/projected/5b923d74-bad3-4780-8e7e-e8365ac9ea06-kube-api-access-cstlg\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.016043 master-0 kubenswrapper[7721]: I0216 02:10:01.015393 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-catalog-content\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.117697 master-0 kubenswrapper[7721]: I0216 02:10:01.117538 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-utilities\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.117955 master-0 kubenswrapper[7721]: I0216 02:10:01.117820 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-utilities\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.117955 master-0 kubenswrapper[7721]: I0216 02:10:01.117844 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-catalog-content\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.118097 master-0 kubenswrapper[7721]: I0216 02:10:01.117962 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzpb\" (UniqueName: \"kubernetes.io/projected/89041b37-18f6-499d-89ec-a0523a25dc58-kube-api-access-zvzpb\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.118097 master-0 kubenswrapper[7721]: I0216 02:10:01.118001 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cstlg\" (UniqueName: \"kubernetes.io/projected/5b923d74-bad3-4780-8e7e-e8365ac9ea06-kube-api-access-cstlg\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.118097 master-0 kubenswrapper[7721]: I0216 02:10:01.118047 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-catalog-content\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.118288 master-0 kubenswrapper[7721]: I0216 02:10:01.118130 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-utilities\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.118712 master-0 kubenswrapper[7721]: I0216 02:10:01.118577 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-catalog-content\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.118712 master-0 kubenswrapper[7721]: I0216 02:10:01.118581 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-catalog-content\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.119003 master-0 kubenswrapper[7721]: I0216 02:10:01.118958 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-utilities\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.135535 master-0 kubenswrapper[7721]: I0216 02:10:01.135488 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cstlg\" (UniqueName: \"kubernetes.io/projected/5b923d74-bad3-4780-8e7e-e8365ac9ea06-kube-api-access-cstlg\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.151988 master-0 kubenswrapper[7721]: I0216 02:10:01.151939 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzpb\" (UniqueName: \"kubernetes.io/projected/89041b37-18f6-499d-89ec-a0523a25dc58-kube-api-access-zvzpb\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.229097 master-0 kubenswrapper[7721]: I0216 02:10:01.229011 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kb98" Feb 16 02:10:01.302925 master-0 kubenswrapper[7721]: I0216 02:10:01.302855 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:10:01.320359 master-0 kubenswrapper[7721]: I0216 02:10:01.320285 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcbl\" (UniqueName: \"kubernetes.io/projected/774b6ff9-0e37-48fd-96c6-571859fec492-kube-api-access-vzcbl\") pod \"774b6ff9-0e37-48fd-96c6-571859fec492\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " Feb 16 02:10:01.320481 master-0 kubenswrapper[7721]: I0216 02:10:01.320379 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-catalog-content\") pod \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " Feb 16 02:10:01.320605 master-0 kubenswrapper[7721]: I0216 02:10:01.320563 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-utilities\") pod \"774b6ff9-0e37-48fd-96c6-571859fec492\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " Feb 16 02:10:01.320654 master-0 kubenswrapper[7721]: I0216 02:10:01.320635 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-catalog-content\") pod \"774b6ff9-0e37-48fd-96c6-571859fec492\" (UID: \"774b6ff9-0e37-48fd-96c6-571859fec492\") " Feb 16 02:10:01.320702 master-0 kubenswrapper[7721]: I0216 02:10:01.320684 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbf5f\" (UniqueName: \"kubernetes.io/projected/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-kube-api-access-gbf5f\") pod \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " Feb 16 02:10:01.320771 master-0 kubenswrapper[7721]: I0216 
02:10:01.320741 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-utilities\") pod \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\" (UID: \"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0\") " Feb 16 02:10:01.321580 master-0 kubenswrapper[7721]: I0216 02:10:01.321533 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-utilities" (OuterVolumeSpecName: "utilities") pod "774b6ff9-0e37-48fd-96c6-571859fec492" (UID: "774b6ff9-0e37-48fd-96c6-571859fec492"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:10:01.322359 master-0 kubenswrapper[7721]: I0216 02:10:01.322273 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-utilities" (OuterVolumeSpecName: "utilities") pod "0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" (UID: "0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:10:01.327156 master-0 kubenswrapper[7721]: I0216 02:10:01.327083 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774b6ff9-0e37-48fd-96c6-571859fec492-kube-api-access-vzcbl" (OuterVolumeSpecName: "kube-api-access-vzcbl") pod "774b6ff9-0e37-48fd-96c6-571859fec492" (UID: "774b6ff9-0e37-48fd-96c6-571859fec492"). InnerVolumeSpecName "kube-api-access-vzcbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:10:01.327535 master-0 kubenswrapper[7721]: I0216 02:10:01.327488 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-kube-api-access-gbf5f" (OuterVolumeSpecName: "kube-api-access-gbf5f") pod "0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" (UID: "0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0"). InnerVolumeSpecName "kube-api-access-gbf5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:10:01.330396 master-0 kubenswrapper[7721]: I0216 02:10:01.330322 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:01.342847 master-0 kubenswrapper[7721]: I0216 02:10:01.342587 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:01.422302 master-0 kubenswrapper[7721]: I0216 02:10:01.422238 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbf5f\" (UniqueName: \"kubernetes.io/projected/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-kube-api-access-gbf5f\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:01.422302 master-0 kubenswrapper[7721]: I0216 02:10:01.422299 7721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-utilities\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:01.422584 master-0 kubenswrapper[7721]: I0216 02:10:01.422322 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzcbl\" (UniqueName: \"kubernetes.io/projected/774b6ff9-0e37-48fd-96c6-571859fec492-kube-api-access-vzcbl\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:01.422584 master-0 kubenswrapper[7721]: I0216 02:10:01.422344 7721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-utilities\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:01.428676 master-0 kubenswrapper[7721]: I0216 02:10:01.428098 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" (UID: "0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:10:01.485502 master-0 kubenswrapper[7721]: I0216 02:10:01.485451 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "774b6ff9-0e37-48fd-96c6-571859fec492" (UID: "774b6ff9-0e37-48fd-96c6-571859fec492"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:10:01.523256 master-0 kubenswrapper[7721]: I0216 02:10:01.523206 7721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/774b6ff9-0e37-48fd-96c6-571859fec492-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:01.523256 master-0 kubenswrapper[7721]: I0216 02:10:01.523254 7721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:01.584541 master-0 kubenswrapper[7721]: I0216 02:10:01.584485 7721 generic.go:334] "Generic (PLEG): container finished" podID="774b6ff9-0e37-48fd-96c6-571859fec492" containerID="60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224" exitCode=0 Feb 16 02:10:01.585248 master-0 kubenswrapper[7721]: I0216 02:10:01.584573 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9kb98" event={"ID":"774b6ff9-0e37-48fd-96c6-571859fec492","Type":"ContainerDied","Data":"60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224"} Feb 16 02:10:01.585477 master-0 kubenswrapper[7721]: I0216 02:10:01.584588 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kb98" Feb 16 02:10:01.586001 master-0 kubenswrapper[7721]: I0216 02:10:01.585352 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kb98" event={"ID":"774b6ff9-0e37-48fd-96c6-571859fec492","Type":"ContainerDied","Data":"a1ddda24c47c0110d6cb5389d382a77f53efd1546a0833c2415421c0d7cbe70f"} Feb 16 02:10:01.586001 master-0 kubenswrapper[7721]: I0216 02:10:01.585369 7721 scope.go:117] "RemoveContainer" containerID="60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224" Feb 16 02:10:01.590080 master-0 kubenswrapper[7721]: I0216 02:10:01.590012 7721 generic.go:334] "Generic (PLEG): container finished" podID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerID="d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373" exitCode=0 Feb 16 02:10:01.590942 master-0 kubenswrapper[7721]: I0216 02:10:01.590150 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkp55" event={"ID":"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0","Type":"ContainerDied","Data":"d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373"} Feb 16 02:10:01.590942 master-0 kubenswrapper[7721]: I0216 02:10:01.590430 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkp55" event={"ID":"0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0","Type":"ContainerDied","Data":"09423cf84bc18b39153d050c3d29a41cb735dc66d3e929960f6e1f6ec404f766"} Feb 16 02:10:01.590942 master-0 kubenswrapper[7721]: I0216 02:10:01.590820 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkp55" Feb 16 02:10:01.615839 master-0 kubenswrapper[7721]: I0216 02:10:01.614964 7721 scope.go:117] "RemoveContainer" containerID="7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d" Feb 16 02:10:01.660873 master-0 kubenswrapper[7721]: I0216 02:10:01.660383 7721 scope.go:117] "RemoveContainer" containerID="f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7" Feb 16 02:10:01.660873 master-0 kubenswrapper[7721]: I0216 02:10:01.660843 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkp55"] Feb 16 02:10:01.668747 master-0 kubenswrapper[7721]: I0216 02:10:01.668552 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vkp55"] Feb 16 02:10:01.672306 master-0 kubenswrapper[7721]: I0216 02:10:01.671870 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kb98"] Feb 16 02:10:01.675765 master-0 kubenswrapper[7721]: I0216 02:10:01.675697 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9kb98"] Feb 16 02:10:01.690239 master-0 kubenswrapper[7721]: I0216 02:10:01.690185 7721 scope.go:117] "RemoveContainer" containerID="60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224" Feb 16 02:10:01.690957 master-0 kubenswrapper[7721]: E0216 02:10:01.690899 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224\": container with ID starting with 60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224 not found: ID does not exist" containerID="60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224" Feb 16 02:10:01.690957 master-0 kubenswrapper[7721]: I0216 02:10:01.690938 7721 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224"} err="failed to get container status \"60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224\": rpc error: code = NotFound desc = could not find container \"60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224\": container with ID starting with 60918c66097ffeb7d50714bdcecec18d468a6dbbe93f7cbc2d5000b882984224 not found: ID does not exist" Feb 16 02:10:01.691532 master-0 kubenswrapper[7721]: I0216 02:10:01.690967 7721 scope.go:117] "RemoveContainer" containerID="7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d" Feb 16 02:10:01.691692 master-0 kubenswrapper[7721]: E0216 02:10:01.691558 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d\": container with ID starting with 7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d not found: ID does not exist" containerID="7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d" Feb 16 02:10:01.691692 master-0 kubenswrapper[7721]: I0216 02:10:01.691586 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d"} err="failed to get container status \"7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d\": rpc error: code = NotFound desc = could not find container \"7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d\": container with ID starting with 7be24ae6b55716532fb5e51eb053c2830fcfe8aa6050ec53c06ef1084c4bbe8d not found: ID does not exist" Feb 16 02:10:01.691692 master-0 kubenswrapper[7721]: I0216 02:10:01.691607 7721 scope.go:117] "RemoveContainer" containerID="f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7" Feb 16 02:10:01.692189 master-0 kubenswrapper[7721]: E0216 
02:10:01.692111 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7\": container with ID starting with f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7 not found: ID does not exist" containerID="f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7" Feb 16 02:10:01.692189 master-0 kubenswrapper[7721]: I0216 02:10:01.692140 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7"} err="failed to get container status \"f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7\": rpc error: code = NotFound desc = could not find container \"f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7\": container with ID starting with f2664a967c697ea91de1f2c899a0e2505b3571321a1d6f66745c91fa25ea05f7 not found: ID does not exist" Feb 16 02:10:01.692189 master-0 kubenswrapper[7721]: I0216 02:10:01.692158 7721 scope.go:117] "RemoveContainer" containerID="d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373" Feb 16 02:10:01.711878 master-0 kubenswrapper[7721]: I0216 02:10:01.711827 7721 scope.go:117] "RemoveContainer" containerID="c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56" Feb 16 02:10:01.732663 master-0 kubenswrapper[7721]: I0216 02:10:01.732593 7721 scope.go:117] "RemoveContainer" containerID="dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a" Feb 16 02:10:01.759092 master-0 kubenswrapper[7721]: I0216 02:10:01.759036 7721 scope.go:117] "RemoveContainer" containerID="d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373" Feb 16 02:10:01.760039 master-0 kubenswrapper[7721]: E0216 02:10:01.759960 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373\": container with ID starting with d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373 not found: ID does not exist" containerID="d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373" Feb 16 02:10:01.760184 master-0 kubenswrapper[7721]: I0216 02:10:01.760038 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373"} err="failed to get container status \"d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373\": rpc error: code = NotFound desc = could not find container \"d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373\": container with ID starting with d4e6dcba58c6aa96b4941baa62882cd9180552a541add0468ddd2b0e1f5a6373 not found: ID does not exist" Feb 16 02:10:01.760184 master-0 kubenswrapper[7721]: I0216 02:10:01.760088 7721 scope.go:117] "RemoveContainer" containerID="c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56" Feb 16 02:10:01.760789 master-0 kubenswrapper[7721]: E0216 02:10:01.760732 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56\": container with ID starting with c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56 not found: ID does not exist" containerID="c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56" Feb 16 02:10:01.760912 master-0 kubenswrapper[7721]: I0216 02:10:01.760772 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56"} err="failed to get container status \"c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56\": rpc error: code = NotFound desc = could not find container 
\"c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56\": container with ID starting with c32d539f67c811ff9746ff9280cdab7ab4b0cb4f483d1d5325c5241516366b56 not found: ID does not exist" Feb 16 02:10:01.760912 master-0 kubenswrapper[7721]: I0216 02:10:01.760888 7721 scope.go:117] "RemoveContainer" containerID="dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a" Feb 16 02:10:01.761767 master-0 kubenswrapper[7721]: E0216 02:10:01.761696 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a\": container with ID starting with dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a not found: ID does not exist" containerID="dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a" Feb 16 02:10:01.761954 master-0 kubenswrapper[7721]: I0216 02:10:01.761762 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a"} err="failed to get container status \"dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a\": rpc error: code = NotFound desc = could not find container \"dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a\": container with ID starting with dd47a692ee6e4d8b43793e4e2b5ea092378722da29abfccc0503b64fd6fa2b5a not found: ID does not exist" Feb 16 02:10:01.867231 master-0 kubenswrapper[7721]: I0216 02:10:01.867147 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkbtj"] Feb 16 02:10:01.870956 master-0 kubenswrapper[7721]: W0216 02:10:01.870880 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b923d74_bad3_4780_8e7e_e8365ac9ea06.slice/crio-49dbd775e2346c33a70cf58828eb787b58a10dad1b4af76f1c103cfe7d36ce1d WatchSource:0}: Error 
finding container 49dbd775e2346c33a70cf58828eb787b58a10dad1b4af76f1c103cfe7d36ce1d: Status 404 returned error can't find the container with id 49dbd775e2346c33a70cf58828eb787b58a10dad1b4af76f1c103cfe7d36ce1d Feb 16 02:10:01.914692 master-0 kubenswrapper[7721]: I0216 02:10:01.914225 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9c6g5"] Feb 16 02:10:01.927247 master-0 kubenswrapper[7721]: W0216 02:10:01.927168 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89041b37_18f6_499d_89ec_a0523a25dc58.slice/crio-7ed4da3dff52ca3a67ddd91e519fed951330dcfe09c0cb1c3559d79f70e3808d WatchSource:0}: Error finding container 7ed4da3dff52ca3a67ddd91e519fed951330dcfe09c0cb1c3559d79f70e3808d: Status 404 returned error can't find the container with id 7ed4da3dff52ca3a67ddd91e519fed951330dcfe09c0cb1c3559d79f70e3808d Feb 16 02:10:02.604369 master-0 kubenswrapper[7721]: I0216 02:10:02.604180 7721 generic.go:334] "Generic (PLEG): container finished" podID="89041b37-18f6-499d-89ec-a0523a25dc58" containerID="22ae7ce64aa3ec99af80eb81cfd477c02a07e643cce741b7421f2d5683a30b06" exitCode=0 Feb 16 02:10:02.604369 master-0 kubenswrapper[7721]: I0216 02:10:02.604300 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c6g5" event={"ID":"89041b37-18f6-499d-89ec-a0523a25dc58","Type":"ContainerDied","Data":"22ae7ce64aa3ec99af80eb81cfd477c02a07e643cce741b7421f2d5683a30b06"} Feb 16 02:10:02.605684 master-0 kubenswrapper[7721]: I0216 02:10:02.604381 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c6g5" event={"ID":"89041b37-18f6-499d-89ec-a0523a25dc58","Type":"ContainerStarted","Data":"7ed4da3dff52ca3a67ddd91e519fed951330dcfe09c0cb1c3559d79f70e3808d"} Feb 16 02:10:02.606623 master-0 kubenswrapper[7721]: I0216 02:10:02.606569 7721 generic.go:334] "Generic (PLEG): container 
finished" podID="5b923d74-bad3-4780-8e7e-e8365ac9ea06" containerID="9e46f269d8f0d63b18fad69b8b71e42902785d9a1b409d9e8c0b0f57a24c8b43" exitCode=0 Feb 16 02:10:02.606774 master-0 kubenswrapper[7721]: I0216 02:10:02.606640 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkbtj" event={"ID":"5b923d74-bad3-4780-8e7e-e8365ac9ea06","Type":"ContainerDied","Data":"9e46f269d8f0d63b18fad69b8b71e42902785d9a1b409d9e8c0b0f57a24c8b43"} Feb 16 02:10:02.606774 master-0 kubenswrapper[7721]: I0216 02:10:02.606707 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkbtj" event={"ID":"5b923d74-bad3-4780-8e7e-e8365ac9ea06","Type":"ContainerStarted","Data":"49dbd775e2346c33a70cf58828eb787b58a10dad1b4af76f1c103cfe7d36ce1d"} Feb 16 02:10:02.739281 master-0 kubenswrapper[7721]: I0216 02:10:02.739127 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" path="/var/lib/kubelet/pods/0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0/volumes" Feb 16 02:10:02.741203 master-0 kubenswrapper[7721]: I0216 02:10:02.741151 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" path="/var/lib/kubelet/pods/774b6ff9-0e37-48fd-96c6-571859fec492/volumes" Feb 16 02:10:02.968925 master-0 kubenswrapper[7721]: I0216 02:10:02.968764 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trqk8"] Feb 16 02:10:02.969382 master-0 kubenswrapper[7721]: I0216 02:10:02.969321 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-trqk8" podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerName="registry-server" containerID="cri-o://fcd918e42e09edbde82af27329ac4d0663845d79ca2085b97d9bb5eab9b7e0af" gracePeriod=2 Feb 16 02:10:03.166652 master-0 kubenswrapper[7721]: I0216 02:10:03.166596 7721 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qtbw"] Feb 16 02:10:03.167345 master-0 kubenswrapper[7721]: I0216 02:10:03.167297 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9qtbw" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerName="registry-server" containerID="cri-o://74431daf39a9feaef137ae8d22f9b9d06dc8b940ba1cc1cbd03fb059358f6dbd" gracePeriod=2 Feb 16 02:10:03.398629 master-0 kubenswrapper[7721]: I0216 02:10:03.398547 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s95k9"] Feb 16 02:10:03.399672 master-0 kubenswrapper[7721]: E0216 02:10:03.399588 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="registry-server" Feb 16 02:10:03.399672 master-0 kubenswrapper[7721]: I0216 02:10:03.399660 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="registry-server" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: E0216 02:10:03.400000 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="extract-utilities" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: I0216 02:10:03.400042 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="extract-utilities" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: E0216 02:10:03.400080 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="extract-content" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: I0216 02:10:03.400098 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="extract-content" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: E0216 
02:10:03.400121 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="extract-content" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: I0216 02:10:03.400131 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="extract-content" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: E0216 02:10:03.400149 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="registry-server" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: I0216 02:10:03.400159 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="registry-server" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: E0216 02:10:03.400183 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="extract-utilities" Feb 16 02:10:03.400197 master-0 kubenswrapper[7721]: I0216 02:10:03.400192 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="extract-utilities" Feb 16 02:10:03.401490 master-0 kubenswrapper[7721]: I0216 02:10:03.400565 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e5d40eb-8051-46a8-9cd9-d2b1f152dbf0" containerName="registry-server" Feb 16 02:10:03.401490 master-0 kubenswrapper[7721]: I0216 02:10:03.400600 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="774b6ff9-0e37-48fd-96c6-571859fec492" containerName="registry-server" Feb 16 02:10:03.402779 master-0 kubenswrapper[7721]: I0216 02:10:03.402701 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.407136 master-0 kubenswrapper[7721]: I0216 02:10:03.407072 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-whkzn" Feb 16 02:10:03.419976 master-0 kubenswrapper[7721]: I0216 02:10:03.419916 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:10:03.421316 master-0 kubenswrapper[7721]: I0216 02:10:03.421259 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s95k9"] Feb 16 02:10:03.421498 master-0 kubenswrapper[7721]: I0216 02:10:03.421362 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:10:03.432718 master-0 kubenswrapper[7721]: I0216 02:10:03.432566 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:10:03.557036 master-0 kubenswrapper[7721]: I0216 02:10:03.556995 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-utilities\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.557128 master-0 kubenswrapper[7721]: I0216 02:10:03.557080 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49hf\" (UniqueName: \"kubernetes.io/projected/5f810ea0-e32d-4097-beca-5194349a57a6-kube-api-access-p49hf\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.557161 master-0 kubenswrapper[7721]: 
I0216 02:10:03.557133 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-catalog-content\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.570974 master-0 kubenswrapper[7721]: I0216 02:10:03.570935 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-thm6w"] Feb 16 02:10:03.571946 master-0 kubenswrapper[7721]: I0216 02:10:03.571916 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.578274 master-0 kubenswrapper[7721]: I0216 02:10:03.578246 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-w8x86" Feb 16 02:10:03.585583 master-0 kubenswrapper[7721]: I0216 02:10:03.585548 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thm6w"] Feb 16 02:10:03.619612 master-0 kubenswrapper[7721]: I0216 02:10:03.618757 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c6g5" event={"ID":"89041b37-18f6-499d-89ec-a0523a25dc58","Type":"ContainerStarted","Data":"7d03945f26a5afefa2326f18d81617f6a565587e2b4f83c138528190839d7076"} Feb 16 02:10:03.631531 master-0 kubenswrapper[7721]: I0216 02:10:03.623759 7721 generic.go:334] "Generic (PLEG): container finished" podID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerID="fcd918e42e09edbde82af27329ac4d0663845d79ca2085b97d9bb5eab9b7e0af" exitCode=0 Feb 16 02:10:03.631531 master-0 kubenswrapper[7721]: I0216 02:10:03.623828 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trqk8" 
event={"ID":"ab463f74-d1e7-44f1-9634-d9f63685b06d","Type":"ContainerDied","Data":"fcd918e42e09edbde82af27329ac4d0663845d79ca2085b97d9bb5eab9b7e0af"} Feb 16 02:10:03.631531 master-0 kubenswrapper[7721]: I0216 02:10:03.623858 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trqk8" event={"ID":"ab463f74-d1e7-44f1-9634-d9f63685b06d","Type":"ContainerDied","Data":"63ff9ecc1fd3652504bc8f536c52d520abfef70fdd743636d7aff4953ee9f4f4"} Feb 16 02:10:03.631531 master-0 kubenswrapper[7721]: I0216 02:10:03.623871 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ff9ecc1fd3652504bc8f536c52d520abfef70fdd743636d7aff4953ee9f4f4" Feb 16 02:10:03.631531 master-0 kubenswrapper[7721]: I0216 02:10:03.624547 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trqk8" Feb 16 02:10:03.631531 master-0 kubenswrapper[7721]: I0216 02:10:03.626097 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkbtj" event={"ID":"5b923d74-bad3-4780-8e7e-e8365ac9ea06","Type":"ContainerStarted","Data":"aea61f85c2790ce795bc5e29cc6af9ceb9ed3e98bdf75492709e6ca02b00c1e9"} Feb 16 02:10:03.631531 master-0 kubenswrapper[7721]: I0216 02:10:03.627940 7721 generic.go:334] "Generic (PLEG): container finished" podID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerID="74431daf39a9feaef137ae8d22f9b9d06dc8b940ba1cc1cbd03fb059358f6dbd" exitCode=0 Feb 16 02:10:03.631531 master-0 kubenswrapper[7721]: I0216 02:10:03.628008 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qtbw" event={"ID":"0bdb65c2-c4bc-4e33-9e5a-61542c659700","Type":"ContainerDied","Data":"74431daf39a9feaef137ae8d22f9b9d06dc8b940ba1cc1cbd03fb059358f6dbd"} Feb 16 02:10:03.632633 master-0 kubenswrapper[7721]: I0216 02:10:03.632612 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:10:03.658569 master-0 kubenswrapper[7721]: I0216 02:10:03.658481 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-utilities\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.658702 master-0 kubenswrapper[7721]: I0216 02:10:03.658661 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49hf\" (UniqueName: \"kubernetes.io/projected/5f810ea0-e32d-4097-beca-5194349a57a6-kube-api-access-p49hf\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.658755 master-0 kubenswrapper[7721]: I0216 02:10:03.658736 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-catalog-content\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.659270 master-0 kubenswrapper[7721]: I0216 02:10:03.659190 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-utilities\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.661923 master-0 kubenswrapper[7721]: I0216 02:10:03.661873 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-catalog-content\") pod \"community-operators-s95k9\" (UID: 
\"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.686473 master-0 kubenswrapper[7721]: I0216 02:10:03.681761 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49hf\" (UniqueName: \"kubernetes.io/projected/5f810ea0-e32d-4097-beca-5194349a57a6-kube-api-access-p49hf\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.694548 master-0 kubenswrapper[7721]: I0216 02:10:03.691903 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qtbw" Feb 16 02:10:03.760075 master-0 kubenswrapper[7721]: I0216 02:10:03.760009 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-catalog-content\") pod \"ab463f74-d1e7-44f1-9634-d9f63685b06d\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " Feb 16 02:10:03.760273 master-0 kubenswrapper[7721]: I0216 02:10:03.760115 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-utilities\") pod \"ab463f74-d1e7-44f1-9634-d9f63685b06d\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " Feb 16 02:10:03.760273 master-0 kubenswrapper[7721]: I0216 02:10:03.760237 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7jgf\" (UniqueName: \"kubernetes.io/projected/ab463f74-d1e7-44f1-9634-d9f63685b06d-kube-api-access-h7jgf\") pod \"ab463f74-d1e7-44f1-9634-d9f63685b06d\" (UID: \"ab463f74-d1e7-44f1-9634-d9f63685b06d\") " Feb 16 02:10:03.760457 master-0 kubenswrapper[7721]: I0216 02:10:03.760410 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wxq28\" (UniqueName: \"kubernetes.io/projected/1487f82c-c14a-4f65-be77-5af2612f56f4-kube-api-access-wxq28\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.760525 master-0 kubenswrapper[7721]: I0216 02:10:03.760476 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-utilities\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.760571 master-0 kubenswrapper[7721]: I0216 02:10:03.760525 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-catalog-content\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.761728 master-0 kubenswrapper[7721]: I0216 02:10:03.761698 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-utilities" (OuterVolumeSpecName: "utilities") pod "ab463f74-d1e7-44f1-9634-d9f63685b06d" (UID: "ab463f74-d1e7-44f1-9634-d9f63685b06d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:10:03.769095 master-0 kubenswrapper[7721]: I0216 02:10:03.769026 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab463f74-d1e7-44f1-9634-d9f63685b06d-kube-api-access-h7jgf" (OuterVolumeSpecName: "kube-api-access-h7jgf") pod "ab463f74-d1e7-44f1-9634-d9f63685b06d" (UID: "ab463f74-d1e7-44f1-9634-d9f63685b06d"). InnerVolumeSpecName "kube-api-access-h7jgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:10:03.822066 master-0 kubenswrapper[7721]: I0216 02:10:03.821933 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab463f74-d1e7-44f1-9634-d9f63685b06d" (UID: "ab463f74-d1e7-44f1-9634-d9f63685b06d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:10:03.861867 master-0 kubenswrapper[7721]: I0216 02:10:03.861834 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xvw2\" (UniqueName: \"kubernetes.io/projected/0bdb65c2-c4bc-4e33-9e5a-61542c659700-kube-api-access-8xvw2\") pod \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " Feb 16 02:10:03.862222 master-0 kubenswrapper[7721]: I0216 02:10:03.862200 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-utilities\") pod \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " Feb 16 02:10:03.862369 master-0 kubenswrapper[7721]: I0216 02:10:03.862352 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-catalog-content\") pod \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\" (UID: \"0bdb65c2-c4bc-4e33-9e5a-61542c659700\") " Feb 16 02:10:03.862924 master-0 kubenswrapper[7721]: I0216 02:10:03.862904 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxq28\" (UniqueName: \"kubernetes.io/projected/1487f82c-c14a-4f65-be77-5af2612f56f4-kube-api-access-wxq28\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " 
pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.863041 master-0 kubenswrapper[7721]: I0216 02:10:03.863028 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-utilities\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.863139 master-0 kubenswrapper[7721]: I0216 02:10:03.863126 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-catalog-content\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.863264 master-0 kubenswrapper[7721]: I0216 02:10:03.863251 7721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-utilities\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:03.863330 master-0 kubenswrapper[7721]: I0216 02:10:03.863319 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7jgf\" (UniqueName: \"kubernetes.io/projected/ab463f74-d1e7-44f1-9634-d9f63685b06d-kube-api-access-h7jgf\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:03.863399 master-0 kubenswrapper[7721]: I0216 02:10:03.863389 7721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab463f74-d1e7-44f1-9634-d9f63685b06d-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:03.863882 master-0 kubenswrapper[7721]: I0216 02:10:03.863868 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-catalog-content\") pod 
\"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.864734 master-0 kubenswrapper[7721]: I0216 02:10:03.864049 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-utilities" (OuterVolumeSpecName: "utilities") pod "0bdb65c2-c4bc-4e33-9e5a-61542c659700" (UID: "0bdb65c2-c4bc-4e33-9e5a-61542c659700"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:10:03.864810 master-0 kubenswrapper[7721]: I0216 02:10:03.864187 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-utilities\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.865177 master-0 kubenswrapper[7721]: I0216 02:10:03.865118 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdb65c2-c4bc-4e33-9e5a-61542c659700-kube-api-access-8xvw2" (OuterVolumeSpecName: "kube-api-access-8xvw2") pod "0bdb65c2-c4bc-4e33-9e5a-61542c659700" (UID: "0bdb65c2-c4bc-4e33-9e5a-61542c659700"). InnerVolumeSpecName "kube-api-access-8xvw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:10:03.893167 master-0 kubenswrapper[7721]: I0216 02:10:03.893108 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxq28\" (UniqueName: \"kubernetes.io/projected/1487f82c-c14a-4f65-be77-5af2612f56f4-kube-api-access-wxq28\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.913231 master-0 kubenswrapper[7721]: I0216 02:10:03.913134 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bdb65c2-c4bc-4e33-9e5a-61542c659700" (UID: "0bdb65c2-c4bc-4e33-9e5a-61542c659700"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:10:03.921313 master-0 kubenswrapper[7721]: I0216 02:10:03.921269 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:10:03.950120 master-0 kubenswrapper[7721]: I0216 02:10:03.950073 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:10:03.965183 master-0 kubenswrapper[7721]: I0216 02:10:03.965124 7721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-utilities\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:03.965183 master-0 kubenswrapper[7721]: I0216 02:10:03.965184 7721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bdb65c2-c4bc-4e33-9e5a-61542c659700-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:03.966507 master-0 kubenswrapper[7721]: I0216 02:10:03.965207 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xvw2\" (UniqueName: \"kubernetes.io/projected/0bdb65c2-c4bc-4e33-9e5a-61542c659700-kube-api-access-8xvw2\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:04.367360 master-0 kubenswrapper[7721]: I0216 02:10:04.367235 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s95k9"] Feb 16 02:10:04.370955 master-0 kubenswrapper[7721]: W0216 02:10:04.370752 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f810ea0_e32d_4097_beca_5194349a57a6.slice/crio-0458436e1991707e1a1730601b13074a765d2ad430ff6238224fc587bdd11634 WatchSource:0}: Error finding container 0458436e1991707e1a1730601b13074a765d2ad430ff6238224fc587bdd11634: Status 404 returned error can't find the container with id 0458436e1991707e1a1730601b13074a765d2ad430ff6238224fc587bdd11634 Feb 16 02:10:04.433122 master-0 kubenswrapper[7721]: I0216 02:10:04.433048 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thm6w"] Feb 16 02:10:04.641105 master-0 kubenswrapper[7721]: I0216 02:10:04.641001 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-9qtbw" event={"ID":"0bdb65c2-c4bc-4e33-9e5a-61542c659700","Type":"ContainerDied","Data":"239825a3989258bbf59de5ff95b00559d0502acecd844ad3bed64d4e2e8c2676"} Feb 16 02:10:04.641822 master-0 kubenswrapper[7721]: I0216 02:10:04.641162 7721 scope.go:117] "RemoveContainer" containerID="74431daf39a9feaef137ae8d22f9b9d06dc8b940ba1cc1cbd03fb059358f6dbd" Feb 16 02:10:04.641822 master-0 kubenswrapper[7721]: I0216 02:10:04.641535 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qtbw" Feb 16 02:10:04.646024 master-0 kubenswrapper[7721]: I0216 02:10:04.645831 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s95k9" event={"ID":"5f810ea0-e32d-4097-beca-5194349a57a6","Type":"ContainerStarted","Data":"0458436e1991707e1a1730601b13074a765d2ad430ff6238224fc587bdd11634"} Feb 16 02:10:04.649153 master-0 kubenswrapper[7721]: I0216 02:10:04.649066 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thm6w" event={"ID":"1487f82c-c14a-4f65-be77-5af2612f56f4","Type":"ContainerStarted","Data":"ba4f7bcf968605deb298487c68a8b9824d062c97781f01a71a2b9894c49e23ed"} Feb 16 02:10:04.654732 master-0 kubenswrapper[7721]: I0216 02:10:04.654653 7721 generic.go:334] "Generic (PLEG): container finished" podID="89041b37-18f6-499d-89ec-a0523a25dc58" containerID="7d03945f26a5afefa2326f18d81617f6a565587e2b4f83c138528190839d7076" exitCode=0 Feb 16 02:10:04.655258 master-0 kubenswrapper[7721]: I0216 02:10:04.654818 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c6g5" event={"ID":"89041b37-18f6-499d-89ec-a0523a25dc58","Type":"ContainerDied","Data":"7d03945f26a5afefa2326f18d81617f6a565587e2b4f83c138528190839d7076"} Feb 16 02:10:04.657826 master-0 kubenswrapper[7721]: I0216 02:10:04.657776 7721 generic.go:334] "Generic (PLEG): container 
finished" podID="5b923d74-bad3-4780-8e7e-e8365ac9ea06" containerID="aea61f85c2790ce795bc5e29cc6af9ceb9ed3e98bdf75492709e6ca02b00c1e9" exitCode=0 Feb 16 02:10:04.658261 master-0 kubenswrapper[7721]: I0216 02:10:04.658086 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkbtj" event={"ID":"5b923d74-bad3-4780-8e7e-e8365ac9ea06","Type":"ContainerDied","Data":"aea61f85c2790ce795bc5e29cc6af9ceb9ed3e98bdf75492709e6ca02b00c1e9"} Feb 16 02:10:04.658426 master-0 kubenswrapper[7721]: I0216 02:10:04.658294 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trqk8" Feb 16 02:10:04.685278 master-0 kubenswrapper[7721]: I0216 02:10:04.683856 7721 scope.go:117] "RemoveContainer" containerID="e5940b75272319c7aabca48d2d6edec79fe11d41b3036bd4f4cedae45e24b5d7" Feb 16 02:10:04.805231 master-0 kubenswrapper[7721]: I0216 02:10:04.805183 7721 scope.go:117] "RemoveContainer" containerID="94f8bb1308558c15bc7708b6c955735ebf5750573898ff17e0375408e94d34fd" Feb 16 02:10:04.840715 master-0 kubenswrapper[7721]: I0216 02:10:04.840550 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qtbw"] Feb 16 02:10:04.847182 master-0 kubenswrapper[7721]: I0216 02:10:04.847136 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qtbw"] Feb 16 02:10:04.868154 master-0 kubenswrapper[7721]: I0216 02:10:04.868120 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trqk8"] Feb 16 02:10:04.873544 master-0 kubenswrapper[7721]: I0216 02:10:04.873476 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-trqk8"] Feb 16 02:10:05.669747 master-0 kubenswrapper[7721]: I0216 02:10:05.669651 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkbtj" 
event={"ID":"5b923d74-bad3-4780-8e7e-e8365ac9ea06","Type":"ContainerStarted","Data":"fae98d0d41c752bf71000336c489ee1d474221b3c0785a1e23221f673af7dd7e"} Feb 16 02:10:05.674826 master-0 kubenswrapper[7721]: I0216 02:10:05.674729 7721 generic.go:334] "Generic (PLEG): container finished" podID="5f810ea0-e32d-4097-beca-5194349a57a6" containerID="1699b8c4f02ae23c693a51c49da941feb9c55db5efaac1f61f4c4aee2139bea0" exitCode=0 Feb 16 02:10:05.675022 master-0 kubenswrapper[7721]: I0216 02:10:05.674854 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s95k9" event={"ID":"5f810ea0-e32d-4097-beca-5194349a57a6","Type":"ContainerDied","Data":"1699b8c4f02ae23c693a51c49da941feb9c55db5efaac1f61f4c4aee2139bea0"} Feb 16 02:10:05.678183 master-0 kubenswrapper[7721]: I0216 02:10:05.678103 7721 generic.go:334] "Generic (PLEG): container finished" podID="1487f82c-c14a-4f65-be77-5af2612f56f4" containerID="21b7e564dfe8c5595be3274f81dfdbf1c60d502a3297f83f36bd1e41f4f2b4cb" exitCode=0 Feb 16 02:10:05.678344 master-0 kubenswrapper[7721]: I0216 02:10:05.678245 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thm6w" event={"ID":"1487f82c-c14a-4f65-be77-5af2612f56f4","Type":"ContainerDied","Data":"21b7e564dfe8c5595be3274f81dfdbf1c60d502a3297f83f36bd1e41f4f2b4cb"} Feb 16 02:10:05.683619 master-0 kubenswrapper[7721]: I0216 02:10:05.683556 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c6g5" event={"ID":"89041b37-18f6-499d-89ec-a0523a25dc58","Type":"ContainerStarted","Data":"f9f71f659e418dee82443c66f57f702f3b65fd2bcec85ca7b8b781f3ef55e480"} Feb 16 02:10:05.704048 master-0 kubenswrapper[7721]: I0216 02:10:05.703923 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gkbtj" podStartSLOduration=3.121198708 podStartE2EDuration="5.703895531s" podCreationTimestamp="2026-02-16 02:10:00 
+0000 UTC" firstStartedPulling="2026-02-16 02:10:02.608484583 +0000 UTC m=+206.102718875" lastFinishedPulling="2026-02-16 02:10:05.191181406 +0000 UTC m=+208.685415698" observedRunningTime="2026-02-16 02:10:05.70059578 +0000 UTC m=+209.194830082" watchObservedRunningTime="2026-02-16 02:10:05.703895531 +0000 UTC m=+209.198129823" Feb 16 02:10:05.755220 master-0 kubenswrapper[7721]: I0216 02:10:05.755108 7721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 02:10:05.755977 master-0 kubenswrapper[7721]: I0216 02:10:05.755852 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://faf5128620c105dbf4c0b83460e5c6d63ea7e16d1417f90a62c09817a9c5e166" gracePeriod=30 Feb 16 02:10:05.756470 master-0 kubenswrapper[7721]: I0216 02:10:05.756396 7721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:10:05.756868 master-0 kubenswrapper[7721]: E0216 02:10:05.756822 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.756950 master-0 kubenswrapper[7721]: I0216 02:10:05.756867 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.756950 master-0 kubenswrapper[7721]: E0216 02:10:05.756889 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerName="extract-content" Feb 16 02:10:05.756950 master-0 kubenswrapper[7721]: I0216 02:10:05.756907 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerName="extract-content" Feb 16 02:10:05.756950 
master-0 kubenswrapper[7721]: E0216 02:10:05.756938 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: I0216 02:10:05.756954 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: E0216 02:10:05.756980 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerName="registry-server" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: I0216 02:10:05.756996 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerName="registry-server" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: E0216 02:10:05.757018 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerName="extract-utilities" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: I0216 02:10:05.757035 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerName="extract-utilities" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: E0216 02:10:05.757063 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: I0216 02:10:05.757079 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: E0216 02:10:05.757104 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerName="extract-content" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: I0216 02:10:05.757120 7721 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerName="extract-content" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: E0216 02:10:05.757148 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerName="registry-server" Feb 16 02:10:05.757180 master-0 kubenswrapper[7721]: I0216 02:10:05.757164 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerName="registry-server" Feb 16 02:10:05.757769 master-0 kubenswrapper[7721]: E0216 02:10:05.757197 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerName="extract-utilities" Feb 16 02:10:05.757769 master-0 kubenswrapper[7721]: I0216 02:10:05.757214 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerName="extract-utilities" Feb 16 02:10:05.757769 master-0 kubenswrapper[7721]: I0216 02:10:05.757424 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.757769 master-0 kubenswrapper[7721]: I0216 02:10:05.757509 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 02:10:05.757769 master-0 kubenswrapper[7721]: I0216 02:10:05.757545 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" containerName="registry-server" Feb 16 02:10:05.757769 master-0 kubenswrapper[7721]: I0216 02:10:05.757571 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.757769 master-0 kubenswrapper[7721]: I0216 02:10:05.757589 7721 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" containerName="registry-server" Feb 16 02:10:05.758110 master-0 kubenswrapper[7721]: E0216 02:10:05.757819 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.758110 master-0 kubenswrapper[7721]: I0216 02:10:05.757845 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.758110 master-0 kubenswrapper[7721]: I0216 02:10:05.758049 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 02:10:05.759811 master-0 kubenswrapper[7721]: I0216 02:10:05.759757 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:10:05.762369 master-0 kubenswrapper[7721]: I0216 02:10:05.761585 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" containerID="cri-o://7921033cca2163ce5e4549f18d23b23e3797f9935bb1bd7ed5580d96e9031f08" gracePeriod=30 Feb 16 02:10:05.797356 master-0 kubenswrapper[7721]: I0216 02:10:05.797242 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9c6g5" podStartSLOduration=3.33676812 podStartE2EDuration="5.797212728s" podCreationTimestamp="2026-02-16 02:10:00 +0000 UTC" firstStartedPulling="2026-02-16 02:10:02.610304568 +0000 UTC m=+206.104538840" lastFinishedPulling="2026-02-16 02:10:05.070749146 +0000 UTC m=+208.564983448" observedRunningTime="2026-02-16 02:10:05.793602848 +0000 UTC m=+209.287837150" watchObservedRunningTime="2026-02-16 02:10:05.797212728 +0000 UTC m=+209.291447030" Feb 16 
02:10:05.830576 master-0 kubenswrapper[7721]: I0216 02:10:05.830511 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-bngv9_c9cd32bc-a13a-44ee-ba52-7bb335c7007b/authentication-operator/0.log" Feb 16 02:10:05.839265 master-0 kubenswrapper[7721]: I0216 02:10:05.838286 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:10:05.909038 master-0 kubenswrapper[7721]: I0216 02:10:05.908961 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"971c7312e8ac72eb9932acb64a3dd785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:10:05.909136 master-0 kubenswrapper[7721]: I0216 02:10:05.909062 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"971c7312e8ac72eb9932acb64a3dd785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:10:05.919988 master-0 kubenswrapper[7721]: I0216 02:10:05.919908 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:10:05.985924 master-0 kubenswrapper[7721]: I0216 02:10:05.985838 7721 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="8825afde-4071-436d-b023-ebc48932b8b2" Feb 16 02:10:06.011102 master-0 kubenswrapper[7721]: I0216 02:10:06.010669 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"971c7312e8ac72eb9932acb64a3dd785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:10:06.011102 master-0 kubenswrapper[7721]: I0216 02:10:06.010567 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"971c7312e8ac72eb9932acb64a3dd785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:10:06.011102 master-0 kubenswrapper[7721]: I0216 02:10:06.010941 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"971c7312e8ac72eb9932acb64a3dd785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:10:06.011102 master-0 kubenswrapper[7721]: I0216 02:10:06.011046 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"971c7312e8ac72eb9932acb64a3dd785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
Feb 16 02:10:06.023992 master-0 kubenswrapper[7721]: I0216 02:10:06.023871 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-bngv9_c9cd32bc-a13a-44ee-ba52-7bb335c7007b/authentication-operator/1.log" Feb 16 02:10:06.112339 master-0 kubenswrapper[7721]: I0216 02:10:06.112281 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 02:10:06.112487 master-0 kubenswrapper[7721]: I0216 02:10:06.112399 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 02:10:06.112487 master-0 kubenswrapper[7721]: I0216 02:10:06.112470 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 02:10:06.112583 master-0 kubenswrapper[7721]: I0216 02:10:06.112457 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config" (OuterVolumeSpecName: "config") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:10:06.112583 master-0 kubenswrapper[7721]: I0216 02:10:06.112525 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 02:10:06.112583 master-0 kubenswrapper[7721]: I0216 02:10:06.112559 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets" (OuterVolumeSpecName: "secrets") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:10:06.112701 master-0 kubenswrapper[7721]: I0216 02:10:06.112595 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 02:10:06.112701 master-0 kubenswrapper[7721]: I0216 02:10:06.112607 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:10:06.112701 master-0 kubenswrapper[7721]: I0216 02:10:06.112682 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs" (OuterVolumeSpecName: "logs") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:10:06.112815 master-0 kubenswrapper[7721]: I0216 02:10:06.112767 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:10:06.113190 master-0 kubenswrapper[7721]: I0216 02:10:06.113157 7721 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:06.113190 master-0 kubenswrapper[7721]: I0216 02:10:06.113188 7721 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:06.113366 master-0 kubenswrapper[7721]: I0216 02:10:06.113202 7721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:06.113366 master-0 kubenswrapper[7721]: I0216 02:10:06.113214 7721 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:06.113366 master-0 kubenswrapper[7721]: I0216 02:10:06.113228 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:06.136789 master-0 kubenswrapper[7721]: I0216 02:10:06.136744 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:10:06.167855 master-0 kubenswrapper[7721]: W0216 02:10:06.167789 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971c7312e8ac72eb9932acb64a3dd785.slice/crio-3382fccbb3d34fbfaf90ce0b807cc46f9761dae67dfbad31837566d7894b9fdf WatchSource:0}: Error finding container 3382fccbb3d34fbfaf90ce0b807cc46f9761dae67dfbad31837566d7894b9fdf: Status 404 returned error can't find the container with id 3382fccbb3d34fbfaf90ce0b807cc46f9761dae67dfbad31837566d7894b9fdf Feb 16 02:10:06.424113 master-0 kubenswrapper[7721]: I0216 02:10:06.423992 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6796f86fd6-qtxkl_22739961-e322-47f1-b232-eaa4cc35319c/fix-audit-permissions/0.log" Feb 16 02:10:06.623283 master-0 kubenswrapper[7721]: I0216 02:10:06.623225 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6796f86fd6-qtxkl_22739961-e322-47f1-b232-eaa4cc35319c/oauth-apiserver/0.log" Feb 16 02:10:06.693776 master-0 kubenswrapper[7721]: I0216 02:10:06.690510 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s95k9" event={"ID":"5f810ea0-e32d-4097-beca-5194349a57a6","Type":"ContainerStarted","Data":"0c089d20e3c3f3c6a423f5f3bfd60d7fd11adf6b1d0bb658070186cbb28fa86e"} Feb 16 02:10:06.696873 master-0 kubenswrapper[7721]: I0216 02:10:06.693815 7721 generic.go:334] "Generic (PLEG): container finished" podID="1487f82c-c14a-4f65-be77-5af2612f56f4" containerID="49af9cb6f60854b313f38f722eafca91d152dc52885026eb8064608e8405a048" exitCode=0 Feb 16 02:10:06.696873 master-0 kubenswrapper[7721]: I0216 02:10:06.693962 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thm6w" 
event={"ID":"1487f82c-c14a-4f65-be77-5af2612f56f4","Type":"ContainerDied","Data":"49af9cb6f60854b313f38f722eafca91d152dc52885026eb8064608e8405a048"} Feb 16 02:10:06.702225 master-0 kubenswrapper[7721]: I0216 02:10:06.702166 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"971c7312e8ac72eb9932acb64a3dd785","Type":"ContainerStarted","Data":"462bc8e54438708fbe0de05ecb433d15f63ff46542c44ae6f1cb6f59fc242a3b"} Feb 16 02:10:06.702410 master-0 kubenswrapper[7721]: I0216 02:10:06.702232 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"971c7312e8ac72eb9932acb64a3dd785","Type":"ContainerStarted","Data":"3382fccbb3d34fbfaf90ce0b807cc46f9761dae67dfbad31837566d7894b9fdf"} Feb 16 02:10:06.703954 master-0 kubenswrapper[7721]: I0216 02:10:06.703888 7721 generic.go:334] "Generic (PLEG): container finished" podID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" containerID="6880014992fa93e0c0801558387fe49a32761a32c34c61cc54ee116a4f50adda" exitCode=0 Feb 16 02:10:06.704029 master-0 kubenswrapper[7721]: I0216 02:10:06.703998 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3","Type":"ContainerDied","Data":"6880014992fa93e0c0801558387fe49a32761a32c34c61cc54ee116a4f50adda"} Feb 16 02:10:06.707235 master-0 kubenswrapper[7721]: I0216 02:10:06.706566 7721 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="faf5128620c105dbf4c0b83460e5c6d63ea7e16d1417f90a62c09817a9c5e166" exitCode=0 Feb 16 02:10:06.707235 master-0 kubenswrapper[7721]: I0216 02:10:06.706606 7721 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="7921033cca2163ce5e4549f18d23b23e3797f9935bb1bd7ed5580d96e9031f08" exitCode=0 Feb 16 
02:10:06.708013 master-0 kubenswrapper[7721]: I0216 02:10:06.707811 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 02:10:06.711621 master-0 kubenswrapper[7721]: I0216 02:10:06.711532 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e4ba8ef0f2d2dcfdef03df990cea8e18604ff8954454af1715e76176988bea9" Feb 16 02:10:06.711621 master-0 kubenswrapper[7721]: I0216 02:10:06.711584 7721 scope.go:117] "RemoveContainer" containerID="f7e5042e1717f873aa5ed64ccd2f2b11417f41ea4156f1f41e924e94dbf23445" Feb 16 02:10:06.725234 master-0 kubenswrapper[7721]: I0216 02:10:06.725193 7721 scope.go:117] "RemoveContainer" containerID="907bfaa35e251ac0a99127e043064ef8d7828048025a8b998d4e1bd9a8208385" Feb 16 02:10:06.750840 master-0 kubenswrapper[7721]: I0216 02:10:06.750759 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bdb65c2-c4bc-4e33-9e5a-61542c659700" path="/var/lib/kubelet/pods/0bdb65c2-c4bc-4e33-9e5a-61542c659700/volumes" Feb 16 02:10:06.753133 master-0 kubenswrapper[7721]: I0216 02:10:06.753089 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" path="/var/lib/kubelet/pods/80420f2e7c3cdda71f7d0d6ccbe6f9f3/volumes" Feb 16 02:10:06.755909 master-0 kubenswrapper[7721]: I0216 02:10:06.755818 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab463f74-d1e7-44f1-9634-d9f63685b06d" path="/var/lib/kubelet/pods/ab463f74-d1e7-44f1-9634-d9f63685b06d/volumes" Feb 16 02:10:06.758915 master-0 kubenswrapper[7721]: I0216 02:10:06.758826 7721 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Feb 16 02:10:06.784634 master-0 kubenswrapper[7721]: I0216 02:10:06.784595 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 
02:10:06.784759 master-0 kubenswrapper[7721]: I0216 02:10:06.784742 7721 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="8825afde-4071-436d-b023-ebc48932b8b2" Feb 16 02:10:06.794803 master-0 kubenswrapper[7721]: I0216 02:10:06.794734 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 02:10:06.794803 master-0 kubenswrapper[7721]: I0216 02:10:06.794773 7721 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="8825afde-4071-436d-b023-ebc48932b8b2" Feb 16 02:10:06.825420 master-0 kubenswrapper[7721]: I0216 02:10:06.825249 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-htjgz_724ac845-3835-458b-9645-e665be135ff9/etcd-operator/0.log" Feb 16 02:10:07.109892 master-0 kubenswrapper[7721]: I0216 02:10:07.109788 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-htjgz_724ac845-3835-458b-9645-e665be135ff9/etcd-operator/1.log" Feb 16 02:10:07.568777 master-0 kubenswrapper[7721]: I0216 02:10:07.566984 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/setup/0.log" Feb 16 02:10:07.700868 master-0 kubenswrapper[7721]: I0216 02:10:07.700808 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-ensure-env-vars/0.log" Feb 16 02:10:07.720082 master-0 kubenswrapper[7721]: I0216 02:10:07.720033 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-b47jp_6c02961f-30ec-4405-b7fa-9c4192342ae9/openshift-controller-manager-operator/1.log" Feb 16 02:10:07.720275 
master-0 kubenswrapper[7721]: I0216 02:10:07.720107 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" event={"ID":"6c02961f-30ec-4405-b7fa-9c4192342ae9","Type":"ContainerStarted","Data":"ef5d9da1b91f2be498c6fee49630a5d1626e9f5b6cace63e038f76957d7bdc73"} Feb 16 02:10:07.731696 master-0 kubenswrapper[7721]: I0216 02:10:07.731651 7721 generic.go:334] "Generic (PLEG): container finished" podID="5f810ea0-e32d-4097-beca-5194349a57a6" containerID="0c089d20e3c3f3c6a423f5f3bfd60d7fd11adf6b1d0bb658070186cbb28fa86e" exitCode=0 Feb 16 02:10:07.731780 master-0 kubenswrapper[7721]: I0216 02:10:07.731735 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s95k9" event={"ID":"5f810ea0-e32d-4097-beca-5194349a57a6","Type":"ContainerDied","Data":"0c089d20e3c3f3c6a423f5f3bfd60d7fd11adf6b1d0bb658070186cbb28fa86e"} Feb 16 02:10:07.745380 master-0 kubenswrapper[7721]: I0216 02:10:07.745338 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"971c7312e8ac72eb9932acb64a3dd785","Type":"ContainerStarted","Data":"1f0cb68115478c6fd515542fbb0fa0d43b3b478c6e2bb7366eec3aa3beebf374"} Feb 16 02:10:07.745491 master-0 kubenswrapper[7721]: I0216 02:10:07.745391 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"971c7312e8ac72eb9932acb64a3dd785","Type":"ContainerStarted","Data":"e06f2cd26b4721860828d787726c09450d829acd1f0cf5360dbf2c9f1becfde8"} Feb 16 02:10:07.745491 master-0 kubenswrapper[7721]: I0216 02:10:07.745402 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"971c7312e8ac72eb9932acb64a3dd785","Type":"ContainerStarted","Data":"f7886612dab7fdbb2c8fa01ccf5ff672b9f28739bb24c915a3676c6391134016"} Feb 16 02:10:07.814412 master-0 kubenswrapper[7721]: I0216 02:10:07.813516 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-resources-copy/0.log" Feb 16 02:10:07.998143 master-0 kubenswrapper[7721]: I0216 02:10:07.998078 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log" Feb 16 02:10:08.126144 master-0 kubenswrapper[7721]: I0216 02:10:08.126105 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 02:10:08.218045 master-0 kubenswrapper[7721]: I0216 02:10:08.217991 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log" Feb 16 02:10:08.249469 master-0 kubenswrapper[7721]: I0216 02:10:08.249418 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kube-api-access\") pod \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " Feb 16 02:10:08.249838 master-0 kubenswrapper[7721]: I0216 02:10:08.249818 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-var-lock\") pod \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " Feb 16 02:10:08.249956 master-0 kubenswrapper[7721]: I0216 02:10:08.249939 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kubelet-dir\") pod \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\" (UID: \"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3\") " Feb 16 02:10:08.250173 master-0 kubenswrapper[7721]: I0216 02:10:08.249921 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-var-lock" (OuterVolumeSpecName: "var-lock") pod "1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" (UID: "1f35c7c9-16ec-486e-99ff-f1cbcce76eb3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:10:08.250239 master-0 kubenswrapper[7721]: I0216 02:10:08.250080 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" (UID: "1f35c7c9-16ec-486e-99ff-f1cbcce76eb3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:10:08.250470 master-0 kubenswrapper[7721]: I0216 02:10:08.250452 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:08.250564 master-0 kubenswrapper[7721]: I0216 02:10:08.250551 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:08.252786 master-0 kubenswrapper[7721]: I0216 02:10:08.252729 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" (UID: "1f35c7c9-16ec-486e-99ff-f1cbcce76eb3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:10:08.351971 master-0 kubenswrapper[7721]: I0216 02:10:08.351855 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f35c7c9-16ec-486e-99ff-f1cbcce76eb3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:08.696039 master-0 kubenswrapper[7721]: I0216 02:10:08.695974 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log" Feb 16 02:10:08.709088 master-0 kubenswrapper[7721]: I0216 02:10:08.709042 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-readyz/0.log" Feb 16 02:10:08.757626 master-0 kubenswrapper[7721]: I0216 02:10:08.756062 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thm6w" event={"ID":"1487f82c-c14a-4f65-be77-5af2612f56f4","Type":"ContainerStarted","Data":"036aa397d998a3e73461d8cb4ef6baaf1dbc300667f630a78c183789b4b4e1b0"} Feb 16 02:10:08.760840 master-0 kubenswrapper[7721]: I0216 02:10:08.760791 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 02:10:08.761453 master-0 kubenswrapper[7721]: I0216 02:10:08.761396 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"1f35c7c9-16ec-486e-99ff-f1cbcce76eb3","Type":"ContainerDied","Data":"98dcc1912f40abe649e3505484d85a2636bf298671bda45fdf2eb9864ccd1111"} Feb 16 02:10:08.761513 master-0 kubenswrapper[7721]: I0216 02:10:08.761467 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98dcc1912f40abe649e3505484d85a2636bf298671bda45fdf2eb9864ccd1111" Feb 16 02:10:09.047098 master-0 kubenswrapper[7721]: I0216 02:10:09.046926 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log" Feb 16 02:10:09.061904 master-0 kubenswrapper[7721]: I0216 02:10:09.061842 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_4733c2df-0f5a-4696-b8c6-2568ebc7debc/installer/0.log" Feb 16 02:10:09.405944 master-0 kubenswrapper[7721]: I0216 02:10:09.405863 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-dsjz2_980aa005-f51d-4ca2-aee6-a6fdeefd86d0/kube-apiserver-operator/0.log" Feb 16 02:10:09.447642 master-0 kubenswrapper[7721]: I0216 02:10:09.447575 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-dsjz2_980aa005-f51d-4ca2-aee6-a6fdeefd86d0/kube-apiserver-operator/1.log" Feb 16 02:10:09.654872 master-0 kubenswrapper[7721]: I0216 02:10:09.654706 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=4.654687101 podStartE2EDuration="4.654687101s" podCreationTimestamp="2026-02-16 02:10:05 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:09.389553191 +0000 UTC m=+212.883787473" watchObservedRunningTime="2026-02-16 02:10:09.654687101 +0000 UTC m=+213.148921363" Feb 16 02:10:09.655157 master-0 kubenswrapper[7721]: I0216 02:10:09.654985 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-thm6w" podStartSLOduration=4.645729709 podStartE2EDuration="6.654981179s" podCreationTimestamp="2026-02-16 02:10:03 +0000 UTC" firstStartedPulling="2026-02-16 02:10:05.680145232 +0000 UTC m=+209.174379534" lastFinishedPulling="2026-02-16 02:10:07.689396742 +0000 UTC m=+211.183631004" observedRunningTime="2026-02-16 02:10:09.654053416 +0000 UTC m=+213.148287698" watchObservedRunningTime="2026-02-16 02:10:09.654981179 +0000 UTC m=+213.149215441" Feb 16 02:10:09.683858 master-0 kubenswrapper[7721]: I0216 02:10:09.683810 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/setup/0.log" Feb 16 02:10:09.768111 master-0 kubenswrapper[7721]: I0216 02:10:09.768037 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s95k9" event={"ID":"5f810ea0-e32d-4097-beca-5194349a57a6","Type":"ContainerStarted","Data":"7ae2347f19b82d53544383ee866d2dc8e808d1cd12b91750e8d6c75f94c40d4f"} Feb 16 02:10:09.802008 master-0 kubenswrapper[7721]: I0216 02:10:09.801953 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/kube-apiserver/0.log" Feb 16 02:10:09.819386 master-0 kubenswrapper[7721]: I0216 02:10:09.819290 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s95k9" podStartSLOduration=3.37588609 
podStartE2EDuration="6.819265956s" podCreationTimestamp="2026-02-16 02:10:03 +0000 UTC" firstStartedPulling="2026-02-16 02:10:05.677100106 +0000 UTC m=+209.171334378" lastFinishedPulling="2026-02-16 02:10:09.120479952 +0000 UTC m=+212.614714244" observedRunningTime="2026-02-16 02:10:09.817289547 +0000 UTC m=+213.311523819" watchObservedRunningTime="2026-02-16 02:10:09.819265956 +0000 UTC m=+213.313500228" Feb 16 02:10:09.823051 master-0 kubenswrapper[7721]: I0216 02:10:09.823006 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/kube-apiserver-insecure-readyz/0.log" Feb 16 02:10:10.024137 master-0 kubenswrapper[7721]: I0216 02:10:10.024097 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_80f43f07-ce08-4c21-9463-ea983a110244/installer/0.log" Feb 16 02:10:10.255422 master-0 kubenswrapper[7721]: I0216 02:10:10.255343 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_1f35c7c9-16ec-486e-99ff-f1cbcce76eb3/installer/0.log" Feb 16 02:10:10.430136 master-0 kubenswrapper[7721]: I0216 02:10:10.430072 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-dgxhp_a8f33151-61df-4b66-ba85-9ba210779059/kube-controller-manager-operator/0.log" Feb 16 02:10:11.330715 master-0 kubenswrapper[7721]: I0216 02:10:11.330561 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:11.330715 master-0 kubenswrapper[7721]: I0216 02:10:11.330622 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:11.343180 master-0 kubenswrapper[7721]: I0216 02:10:11.343139 7721 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:11.346003 master-0 kubenswrapper[7721]: I0216 02:10:11.343415 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:10:11.380575 master-0 kubenswrapper[7721]: I0216 02:10:11.380511 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:11.426321 master-0 kubenswrapper[7721]: I0216 02:10:11.426241 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_9460ca0802075a8a6a10d7b3e6052c4d/kube-scheduler/0.log" Feb 16 02:10:11.627590 master-0 kubenswrapper[7721]: I0216 02:10:11.627420 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_9460ca0802075a8a6a10d7b3e6052c4d/kube-scheduler/1.log" Feb 16 02:10:11.824389 master-0 kubenswrapper[7721]: I0216 02:10:11.824316 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8ea4c28c-8f53-4b41-9c85-c8c50599d7cd/installer/0.log" Feb 16 02:10:11.852333 master-0 kubenswrapper[7721]: I0216 02:10:11.852270 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:10:12.029050 master-0 kubenswrapper[7721]: I0216 02:10:12.028961 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-mmhcs_1743372f-bdb0-4558-b47b-3714f3aa3fde/kube-scheduler-operator-container/0.log" Feb 16 02:10:12.225087 master-0 kubenswrapper[7721]: I0216 02:10:12.225013 7721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-mmhcs_1743372f-bdb0-4558-b47b-3714f3aa3fde/kube-scheduler-operator-container/1.log" Feb 16 02:10:12.403809 master-0 kubenswrapper[7721]: I0216 02:10:12.403638 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9c6g5" podUID="89041b37-18f6-499d-89ec-a0523a25dc58" containerName="registry-server" probeResult="failure" output=< Feb 16 02:10:12.403809 master-0 kubenswrapper[7721]: timeout: failed to connect service ":50051" within 1s Feb 16 02:10:12.403809 master-0 kubenswrapper[7721]: > Feb 16 02:10:12.421516 master-0 kubenswrapper[7721]: I0216 02:10:12.421412 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-v7lmz_e379cfaf-3a4c-40e7-8641-3524b3669295/openshift-apiserver-operator/1.log" Feb 16 02:10:12.629864 master-0 kubenswrapper[7721]: I0216 02:10:12.629798 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-v7lmz_e379cfaf-3a4c-40e7-8641-3524b3669295/openshift-apiserver-operator/2.log" Feb 16 02:10:12.824480 master-0 kubenswrapper[7721]: I0216 02:10:12.822089 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-578b9bc556-8g98v_8f918d5b-1a4c-4b56-98a4-5cef638bb615/fix-audit-permissions/0.log" Feb 16 02:10:13.028822 master-0 kubenswrapper[7721]: I0216 02:10:13.028694 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-578b9bc556-8g98v_8f918d5b-1a4c-4b56-98a4-5cef638bb615/openshift-apiserver/0.log" Feb 16 02:10:13.225875 master-0 kubenswrapper[7721]: I0216 02:10:13.225786 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-578b9bc556-8g98v_8f918d5b-1a4c-4b56-98a4-5cef638bb615/openshift-apiserver-check-endpoints/0.log" Feb 16 02:10:13.430257 
master-0 kubenswrapper[7721]: I0216 02:10:13.430171 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-htjgz_724ac845-3835-458b-9645-e665be135ff9/etcd-operator/0.log"
Feb 16 02:10:13.625036 master-0 kubenswrapper[7721]: I0216 02:10:13.624817 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-htjgz_724ac845-3835-458b-9645-e665be135ff9/etcd-operator/1.log"
Feb 16 02:10:13.835468 master-0 kubenswrapper[7721]: I0216 02:10:13.835387 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-588944557d-2z8fq_76915cba-7c11-4bd8-9943-81de74e7781b/catalog-operator/0.log"
Feb 16 02:10:13.921913 master-0 kubenswrapper[7721]: I0216 02:10:13.921817 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s95k9"
Feb 16 02:10:13.921913 master-0 kubenswrapper[7721]: I0216 02:10:13.921886 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s95k9"
Feb 16 02:10:13.950664 master-0 kubenswrapper[7721]: I0216 02:10:13.950566 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16 02:10:13.950664 master-0 kubenswrapper[7721]: I0216 02:10:13.950638 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16 02:10:14.000210 master-0 kubenswrapper[7721]: I0216 02:10:14.000132 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s95k9"
Feb 16 02:10:14.031235 master-0 kubenswrapper[7721]: I0216 02:10:14.031127 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b56bd877c-qwp9g_467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/olm-operator/0.log"
Feb 16 02:10:14.032410 master-0 kubenswrapper[7721]: I0216 02:10:14.032355 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16 02:10:14.224199 master-0 kubenswrapper[7721]: I0216 02:10:14.223981 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-tkqng_23755f7f-dce6-4dcf-9664-22e3aedb5c81/kube-rbac-proxy/0.log"
Feb 16 02:10:14.429298 master-0 kubenswrapper[7721]: I0216 02:10:14.429194 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-tkqng_23755f7f-dce6-4dcf-9664-22e3aedb5c81/package-server-manager/0.log"
Feb 16 02:10:14.627337 master-0 kubenswrapper[7721]: I0216 02:10:14.627114 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-87777c9b7-fxzh6_dc3354cb-b6c3-40a5-a695-cccb079ad292/packageserver/0.log"
Feb 16 02:10:14.877217 master-0 kubenswrapper[7721]: I0216 02:10:14.877119 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s95k9"
Feb 16 02:10:14.878951 master-0 kubenswrapper[7721]: I0216 02:10:14.878815 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16 02:10:16.137690 master-0 kubenswrapper[7721]: I0216 02:10:16.137624 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:10:16.137690 master-0 kubenswrapper[7721]: I0216 02:10:16.137691 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:10:16.143231 master-0 kubenswrapper[7721]: I0216 02:10:16.143153 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:10:16.143536 master-0 kubenswrapper[7721]: I0216 02:10:16.143259 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:10:16.153939 master-0 kubenswrapper[7721]: I0216 02:10:16.153868 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:10:16.159125 master-0 kubenswrapper[7721]: I0216 02:10:16.159079 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:10:16.827469 master-0 kubenswrapper[7721]: I0216 02:10:16.827388 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:10:16.827945 master-0 kubenswrapper[7721]: I0216 02:10:16.827900 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:10:21.408509 master-0 kubenswrapper[7721]: I0216 02:10:21.408396 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9c6g5"
Feb 16 02:10:21.477049 master-0 kubenswrapper[7721]: I0216 02:10:21.476963 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9c6g5"
Feb 16 02:10:24.215790 master-0 kubenswrapper[7721]: I0216 02:10:24.215744 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qd4l7"]
Feb 16 02:10:24.218758 master-0 kubenswrapper[7721]: E0216 02:10:24.218711 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" containerName="installer"
Feb 16 02:10:24.218758 master-0 kubenswrapper[7721]: I0216 02:10:24.218757 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" containerName="installer"
Feb 16 02:10:24.219237 master-0 kubenswrapper[7721]: I0216 02:10:24.219216 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" containerName="installer"
Feb 16 02:10:24.220296 master-0 kubenswrapper[7721]: I0216 02:10:24.220270 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.223034 master-0 kubenswrapper[7721]: I0216 02:10:24.222985 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 02:10:24.223250 master-0 kubenswrapper[7721]: I0216 02:10:24.223225 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-bh67v"
Feb 16 02:10:24.344462 master-0 kubenswrapper[7721]: I0216 02:10:24.342888 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj"]
Feb 16 02:10:24.344462 master-0 kubenswrapper[7721]: I0216 02:10:24.343131 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerName="kube-rbac-proxy" containerID="cri-o://90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22" gracePeriod=30
Feb 16 02:10:24.344462 master-0 kubenswrapper[7721]: I0216 02:10:24.343506 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerName="machine-approver-controller" containerID="cri-o://d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3" gracePeriod=30
Feb 16 02:10:24.380112 master-0 kubenswrapper[7721]: I0216 02:10:24.378063 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"]
Feb 16 02:10:24.380112 master-0 kubenswrapper[7721]: I0216 02:10:24.379160 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:10:24.383385 master-0 kubenswrapper[7721]: I0216 02:10:24.382068 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-tw6fq"
Feb 16 02:10:24.386768 master-0 kubenswrapper[7721]: I0216 02:10:24.386734 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg"]
Feb 16 02:10:24.386998 master-0 kubenswrapper[7721]: I0216 02:10:24.386953 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="cluster-cloud-controller-manager" containerID="cri-o://e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e" gracePeriod=30
Feb 16 02:10:24.387087 master-0 kubenswrapper[7721]: I0216 02:10:24.387072 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="config-sync-controllers" containerID="cri-o://86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4" gracePeriod=30
Feb 16 02:10:24.394553 master-0 kubenswrapper[7721]: I0216 02:10:24.394466 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="kube-rbac-proxy" containerID="cri-o://54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28" gracePeriod=30
Feb 16 02:10:24.395097 master-0 kubenswrapper[7721]: I0216 02:10:24.395072 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"]
Feb 16 02:10:24.414201 master-0 kubenswrapper[7721]: I0216 02:10:24.413973 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fbc8f91-f8cc-48d8-917c-64fa978069de-rootfs\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.414201 master-0 kubenswrapper[7721]: I0216 02:10:24.414031 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnwz\" (UniqueName: \"kubernetes.io/projected/0fbc8f91-f8cc-48d8-917c-64fa978069de-kube-api-access-5bnwz\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.414338 master-0 kubenswrapper[7721]: I0216 02:10:24.414184 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.414338 master-0 kubenswrapper[7721]: I0216 02:10:24.414300 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.512878 master-0 kubenswrapper[7721]: E0216 02:10:24.512818 7721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf27ae528_68de_4b59_9536_2d49b7a3cb29.slice/crio-conmon-90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf27ae528_68de_4b59_9536_2d49b7a3cb29.slice/crio-conmon-d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf27ae528_68de_4b59_9536_2d49b7a3cb29.slice/crio-d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 02:10:24.519484 master-0 kubenswrapper[7721]: I0216 02:10:24.519414 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bnwz\" (UniqueName: \"kubernetes.io/projected/0fbc8f91-f8cc-48d8-917c-64fa978069de-kube-api-access-5bnwz\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.519547 master-0 kubenswrapper[7721]: I0216 02:10:24.519513 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.519580 master-0 kubenswrapper[7721]: I0216 02:10:24.519556 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.519615 master-0 kubenswrapper[7721]: I0216 02:10:24.519579 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:10:24.519705 master-0 kubenswrapper[7721]: I0216 02:10:24.519676 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fbc8f91-f8cc-48d8-917c-64fa978069de-rootfs\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.519705 master-0 kubenswrapper[7721]: I0216 02:10:24.519703 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz49l\" (UniqueName: \"kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:10:24.520073 master-0 kubenswrapper[7721]: I0216 02:10:24.520046 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fbc8f91-f8cc-48d8-917c-64fa978069de-rootfs\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.521915 master-0 kubenswrapper[7721]: I0216 02:10:24.521885 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.523258 master-0 kubenswrapper[7721]: I0216 02:10:24.523225 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.549011 master-0 kubenswrapper[7721]: I0216 02:10:24.548963 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bnwz\" (UniqueName: \"kubernetes.io/projected/0fbc8f91-f8cc-48d8-917c-64fa978069de-kube-api-access-5bnwz\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.620576 master-0 kubenswrapper[7721]: I0216 02:10:24.620527 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz49l\" (UniqueName: \"kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:10:24.620758 master-0 kubenswrapper[7721]: I0216 02:10:24.620613 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:10:24.620980 master-0 kubenswrapper[7721]: I0216 02:10:24.620947 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj"
Feb 16 02:10:24.624337 master-0 kubenswrapper[7721]: I0216 02:10:24.624298 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:10:24.626178 master-0 kubenswrapper[7721]: I0216 02:10:24.626137 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg"
Feb 16 02:10:24.637895 master-0 kubenswrapper[7721]: I0216 02:10:24.637852 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz49l\" (UniqueName: \"kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:10:24.721617 master-0 kubenswrapper[7721]: I0216 02:10:24.721569 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-config\") pod \"f27ae528-68de-4b59-9536-2d49b7a3cb29\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") "
Feb 16 02:10:24.721821 master-0 kubenswrapper[7721]: I0216 02:10:24.721627 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xwmq\" (UniqueName: \"kubernetes.io/projected/f27ae528-68de-4b59-9536-2d49b7a3cb29-kube-api-access-4xwmq\") pod \"f27ae528-68de-4b59-9536-2d49b7a3cb29\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") "
Feb 16 02:10:24.721821 master-0 kubenswrapper[7721]: I0216 02:10:24.721663 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-auth-proxy-config\") pod \"f27ae528-68de-4b59-9536-2d49b7a3cb29\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") "
Feb 16 02:10:24.721821 master-0 kubenswrapper[7721]: I0216 02:10:24.721694 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f27ae528-68de-4b59-9536-2d49b7a3cb29-machine-approver-tls\") pod \"f27ae528-68de-4b59-9536-2d49b7a3cb29\" (UID: \"f27ae528-68de-4b59-9536-2d49b7a3cb29\") "
Feb 16 02:10:24.722095 master-0 kubenswrapper[7721]: I0216 02:10:24.722048 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-config" (OuterVolumeSpecName: "config") pod "f27ae528-68de-4b59-9536-2d49b7a3cb29" (UID: "f27ae528-68de-4b59-9536-2d49b7a3cb29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:10:24.722260 master-0 kubenswrapper[7721]: I0216 02:10:24.722195 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "f27ae528-68de-4b59-9536-2d49b7a3cb29" (UID: "f27ae528-68de-4b59-9536-2d49b7a3cb29"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:10:24.725832 master-0 kubenswrapper[7721]: I0216 02:10:24.725756 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f27ae528-68de-4b59-9536-2d49b7a3cb29-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "f27ae528-68de-4b59-9536-2d49b7a3cb29" (UID: "f27ae528-68de-4b59-9536-2d49b7a3cb29"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:10:24.726743 master-0 kubenswrapper[7721]: I0216 02:10:24.726693 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27ae528-68de-4b59-9536-2d49b7a3cb29-kube-api-access-4xwmq" (OuterVolumeSpecName: "kube-api-access-4xwmq") pod "f27ae528-68de-4b59-9536-2d49b7a3cb29" (UID: "f27ae528-68de-4b59-9536-2d49b7a3cb29"). InnerVolumeSpecName "kube-api-access-4xwmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:10:24.822640 master-0 kubenswrapper[7721]: I0216 02:10:24.822587 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-auth-proxy-config\") pod \"b120a297-2f2b-43f4-a19a-dad863cb2272\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") "
Feb 16 02:10:24.822836 master-0 kubenswrapper[7721]: I0216 02:10:24.822659 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b120a297-2f2b-43f4-a19a-dad863cb2272-host-etc-kube\") pod \"b120a297-2f2b-43f4-a19a-dad863cb2272\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") "
Feb 16 02:10:24.822836 master-0 kubenswrapper[7721]: I0216 02:10:24.822707 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rs4\" (UniqueName: \"kubernetes.io/projected/b120a297-2f2b-43f4-a19a-dad863cb2272-kube-api-access-v4rs4\") pod \"b120a297-2f2b-43f4-a19a-dad863cb2272\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") "
Feb 16 02:10:24.822836 master-0 kubenswrapper[7721]: I0216 02:10:24.822736 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b120a297-2f2b-43f4-a19a-dad863cb2272-cloud-controller-manager-operator-tls\") pod \"b120a297-2f2b-43f4-a19a-dad863cb2272\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") "
Feb 16 02:10:24.822836 master-0 kubenswrapper[7721]: I0216 02:10:24.822777 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-images\") pod \"b120a297-2f2b-43f4-a19a-dad863cb2272\" (UID: \"b120a297-2f2b-43f4-a19a-dad863cb2272\") "
Feb 16 02:10:24.823046 master-0 kubenswrapper[7721]: I0216 02:10:24.823025 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.823088 master-0 kubenswrapper[7721]: I0216 02:10:24.823048 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xwmq\" (UniqueName: \"kubernetes.io/projected/f27ae528-68de-4b59-9536-2d49b7a3cb29-kube-api-access-4xwmq\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.823088 master-0 kubenswrapper[7721]: I0216 02:10:24.823062 7721 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f27ae528-68de-4b59-9536-2d49b7a3cb29-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.823088 master-0 kubenswrapper[7721]: I0216 02:10:24.823077 7721 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f27ae528-68de-4b59-9536-2d49b7a3cb29-machine-approver-tls\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.823088 master-0 kubenswrapper[7721]: I0216 02:10:24.823061 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "b120a297-2f2b-43f4-a19a-dad863cb2272" (UID: "b120a297-2f2b-43f4-a19a-dad863cb2272"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:10:24.823656 master-0 kubenswrapper[7721]: I0216 02:10:24.823622 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-images" (OuterVolumeSpecName: "images") pod "b120a297-2f2b-43f4-a19a-dad863cb2272" (UID: "b120a297-2f2b-43f4-a19a-dad863cb2272"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:10:24.823716 master-0 kubenswrapper[7721]: I0216 02:10:24.823682 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b120a297-2f2b-43f4-a19a-dad863cb2272-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "b120a297-2f2b-43f4-a19a-dad863cb2272" (UID: "b120a297-2f2b-43f4-a19a-dad863cb2272"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:10:24.826871 master-0 kubenswrapper[7721]: I0216 02:10:24.826708 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b120a297-2f2b-43f4-a19a-dad863cb2272-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "b120a297-2f2b-43f4-a19a-dad863cb2272" (UID: "b120a297-2f2b-43f4-a19a-dad863cb2272"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:10:24.827240 master-0 kubenswrapper[7721]: I0216 02:10:24.827204 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b120a297-2f2b-43f4-a19a-dad863cb2272-kube-api-access-v4rs4" (OuterVolumeSpecName: "kube-api-access-v4rs4") pod "b120a297-2f2b-43f4-a19a-dad863cb2272" (UID: "b120a297-2f2b-43f4-a19a-dad863cb2272"). InnerVolumeSpecName "kube-api-access-v4rs4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:10:24.843349 master-0 kubenswrapper[7721]: I0216 02:10:24.843309 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:10:24.865565 master-0 kubenswrapper[7721]: W0216 02:10:24.865505 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbc8f91_f8cc_48d8_917c_64fa978069de.slice/crio-32c10b6146223a63a25cf689fec7461ab6f7e2b10981648562bd63e55782f342 WatchSource:0}: Error finding container 32c10b6146223a63a25cf689fec7461ab6f7e2b10981648562bd63e55782f342: Status 404 returned error can't find the container with id 32c10b6146223a63a25cf689fec7461ab6f7e2b10981648562bd63e55782f342
Feb 16 02:10:24.887942 master-0 kubenswrapper[7721]: I0216 02:10:24.887891 7721 generic.go:334] "Generic (PLEG): container finished" podID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerID="54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28" exitCode=0
Feb 16 02:10:24.888235 master-0 kubenswrapper[7721]: I0216 02:10:24.888216 7721 generic.go:334] "Generic (PLEG): container finished" podID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerID="86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4" exitCode=0
Feb 16 02:10:24.888572 master-0 kubenswrapper[7721]: I0216 02:10:24.888531 7721 generic.go:334] "Generic (PLEG): container finished" podID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerID="e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e" exitCode=0
Feb 16 02:10:24.888757 master-0 kubenswrapper[7721]: I0216 02:10:24.888732 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" event={"ID":"b120a297-2f2b-43f4-a19a-dad863cb2272","Type":"ContainerDied","Data":"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28"}
Feb 16 02:10:24.888883 master-0 kubenswrapper[7721]: I0216 02:10:24.888867 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" event={"ID":"b120a297-2f2b-43f4-a19a-dad863cb2272","Type":"ContainerDied","Data":"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4"}
Feb 16 02:10:24.889000 master-0 kubenswrapper[7721]: I0216 02:10:24.888966 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" event={"ID":"b120a297-2f2b-43f4-a19a-dad863cb2272","Type":"ContainerDied","Data":"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e"}
Feb 16 02:10:24.889090 master-0 kubenswrapper[7721]: I0216 02:10:24.889075 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg" event={"ID":"b120a297-2f2b-43f4-a19a-dad863cb2272","Type":"ContainerDied","Data":"b8494b263fe1529d6f7a8254addf28f0b268f636a2e08846d97c6e9e64889d8b"}
Feb 16 02:10:24.889164 master-0 kubenswrapper[7721]: I0216 02:10:24.889153 7721 scope.go:117] "RemoveContainer" containerID="54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28"
Feb 16 02:10:24.889345 master-0 kubenswrapper[7721]: I0216 02:10:24.889333 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg"
Feb 16 02:10:24.891896 master-0 kubenswrapper[7721]: I0216 02:10:24.891240 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" event={"ID":"0fbc8f91-f8cc-48d8-917c-64fa978069de","Type":"ContainerStarted","Data":"32c10b6146223a63a25cf689fec7461ab6f7e2b10981648562bd63e55782f342"}
Feb 16 02:10:24.894561 master-0 kubenswrapper[7721]: I0216 02:10:24.894508 7721 generic.go:334] "Generic (PLEG): container finished" podID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerID="d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3" exitCode=0
Feb 16 02:10:24.894561 master-0 kubenswrapper[7721]: I0216 02:10:24.894533 7721 generic.go:334] "Generic (PLEG): container finished" podID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerID="90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22" exitCode=0
Feb 16 02:10:24.894561 master-0 kubenswrapper[7721]: I0216 02:10:24.894551 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" event={"ID":"f27ae528-68de-4b59-9536-2d49b7a3cb29","Type":"ContainerDied","Data":"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3"}
Feb 16 02:10:24.894561 master-0 kubenswrapper[7721]: I0216 02:10:24.894571 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" event={"ID":"f27ae528-68de-4b59-9536-2d49b7a3cb29","Type":"ContainerDied","Data":"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22"}
Feb 16 02:10:24.894561 master-0 kubenswrapper[7721]: I0216 02:10:24.894584 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj" event={"ID":"f27ae528-68de-4b59-9536-2d49b7a3cb29","Type":"ContainerDied","Data":"c8272269dd94d2b1ec1e9213f6473a063042376209d9c67263706c106c76b22d"}
Feb 16 02:10:24.894561 master-0 kubenswrapper[7721]: I0216 02:10:24.894639 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj"
Feb 16 02:10:24.913239 master-0 kubenswrapper[7721]: I0216 02:10:24.913213 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:10:24.918181 master-0 kubenswrapper[7721]: I0216 02:10:24.918149 7721 scope.go:117] "RemoveContainer" containerID="86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4"
Feb 16 02:10:24.922723 master-0 kubenswrapper[7721]: I0216 02:10:24.922667 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj"]
Feb 16 02:10:24.924774 master-0 kubenswrapper[7721]: I0216 02:10:24.924748 7721 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.924774 master-0 kubenswrapper[7721]: I0216 02:10:24.924775 7721 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b120a297-2f2b-43f4-a19a-dad863cb2272-host-etc-kube\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.924872 master-0 kubenswrapper[7721]: I0216 02:10:24.924786 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4rs4\" (UniqueName: \"kubernetes.io/projected/b120a297-2f2b-43f4-a19a-dad863cb2272-kube-api-access-v4rs4\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.924872 master-0 kubenswrapper[7721]: I0216 02:10:24.924798 7721 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b120a297-2f2b-43f4-a19a-dad863cb2272-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.924872 master-0 kubenswrapper[7721]: I0216 02:10:24.924811 7721 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b120a297-2f2b-43f4-a19a-dad863cb2272-images\") on node \"master-0\" DevicePath \"\""
Feb 16 02:10:24.935557 master-0 kubenswrapper[7721]: I0216 02:10:24.935502 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-j4xhj"]
Feb 16 02:10:24.954402 master-0 kubenswrapper[7721]: I0216 02:10:24.954337 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg"]
Feb 16 02:10:24.957842 master-0 kubenswrapper[7721]: I0216 02:10:24.957775 7721 scope.go:117] "RemoveContainer" containerID="e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e"
Feb 16 02:10:24.968247 master-0 kubenswrapper[7721]: I0216 02:10:24.968208 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-s52cg"]
Feb 16 02:10:24.977403 master-0 kubenswrapper[7721]: I0216 02:10:24.977294 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl"]
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: E0216 02:10:24.987898 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="cluster-cloud-controller-manager"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.987974 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="cluster-cloud-controller-manager"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: E0216 02:10:24.988017 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerName="machine-approver-controller"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988037 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerName="machine-approver-controller"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: E0216 02:10:24.988076 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerName="kube-rbac-proxy"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988095 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerName="kube-rbac-proxy"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: E0216 02:10:24.988119 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="config-sync-controllers"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988136 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="config-sync-controllers"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: E0216 02:10:24.988160 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="kube-rbac-proxy"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988177 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="kube-rbac-proxy"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988532 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerName="machine-approver-controller"
Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988572 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" containerName="kube-rbac-proxy" Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988596 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="cluster-cloud-controller-manager" Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988622 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="config-sync-controllers" Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.988637 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" containerName="kube-rbac-proxy" Feb 16 02:10:24.994542 master-0 kubenswrapper[7721]: I0216 02:10:24.989812 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:24.999276 master-0 kubenswrapper[7721]: I0216 02:10:24.995998 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 02:10:24.999276 master-0 kubenswrapper[7721]: I0216 02:10:24.996309 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 02:10:24.999276 master-0 kubenswrapper[7721]: I0216 02:10:24.996641 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kvrqk" Feb 16 02:10:24.999276 master-0 kubenswrapper[7721]: I0216 02:10:24.997570 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 02:10:24.999276 master-0 kubenswrapper[7721]: I0216 02:10:24.998040 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 02:10:25.005517 master-0 kubenswrapper[7721]: I0216 02:10:25.002109 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: I0216 02:10:25.020089 7721 scope.go:117] "RemoveContainer" containerID="54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: E0216 02:10:25.020647 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28\": container with ID starting with 54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28 not found: ID does not exist" 
containerID="54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: I0216 02:10:25.020673 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28"} err="failed to get container status \"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28\": rpc error: code = NotFound desc = could not find container \"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28\": container with ID starting with 54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28 not found: ID does not exist" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: I0216 02:10:25.020697 7721 scope.go:117] "RemoveContainer" containerID="86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: E0216 02:10:25.021013 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4\": container with ID starting with 86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4 not found: ID does not exist" containerID="86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: I0216 02:10:25.021082 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4"} err="failed to get container status \"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4\": rpc error: code = NotFound desc = could not find container \"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4\": container with ID starting with 86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4 not found: ID does not exist" Feb 16 02:10:25.021701 master-0 
kubenswrapper[7721]: I0216 02:10:25.021116 7721 scope.go:117] "RemoveContainer" containerID="e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: E0216 02:10:25.021617 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e\": container with ID starting with e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e not found: ID does not exist" containerID="e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: I0216 02:10:25.021641 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e"} err="failed to get container status \"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e\": rpc error: code = NotFound desc = could not find container \"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e\": container with ID starting with e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e not found: ID does not exist" Feb 16 02:10:25.021701 master-0 kubenswrapper[7721]: I0216 02:10:25.021657 7721 scope.go:117] "RemoveContainer" containerID="54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28" Feb 16 02:10:25.022177 master-0 kubenswrapper[7721]: I0216 02:10:25.021962 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28"} err="failed to get container status \"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28\": rpc error: code = NotFound desc = could not find container \"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28\": container with ID starting with 
54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28 not found: ID does not exist" Feb 16 02:10:25.022177 master-0 kubenswrapper[7721]: I0216 02:10:25.021985 7721 scope.go:117] "RemoveContainer" containerID="86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4" Feb 16 02:10:25.023472 master-0 kubenswrapper[7721]: I0216 02:10:25.023395 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4"} err="failed to get container status \"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4\": rpc error: code = NotFound desc = could not find container \"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4\": container with ID starting with 86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4 not found: ID does not exist" Feb 16 02:10:25.023472 master-0 kubenswrapper[7721]: I0216 02:10:25.023426 7721 scope.go:117] "RemoveContainer" containerID="e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e" Feb 16 02:10:25.024071 master-0 kubenswrapper[7721]: I0216 02:10:25.024013 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e"} err="failed to get container status \"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e\": rpc error: code = NotFound desc = could not find container \"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e\": container with ID starting with e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e not found: ID does not exist" Feb 16 02:10:25.024071 master-0 kubenswrapper[7721]: I0216 02:10:25.024065 7721 scope.go:117] "RemoveContainer" containerID="54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28" Feb 16 02:10:25.024414 master-0 kubenswrapper[7721]: I0216 02:10:25.024361 7721 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28"} err="failed to get container status \"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28\": rpc error: code = NotFound desc = could not find container \"54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28\": container with ID starting with 54ac0511a79cf836ca40b96533cb52642311ac3eb6a7e16a8e1debf0310fdc28 not found: ID does not exist" Feb 16 02:10:25.024414 master-0 kubenswrapper[7721]: I0216 02:10:25.024391 7721 scope.go:117] "RemoveContainer" containerID="86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4" Feb 16 02:10:25.024677 master-0 kubenswrapper[7721]: I0216 02:10:25.024648 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4"} err="failed to get container status \"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4\": rpc error: code = NotFound desc = could not find container \"86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4\": container with ID starting with 86995061c658ce827216a0caa7afe639797ce66a82b23130be90bca7b7ae56d4 not found: ID does not exist" Feb 16 02:10:25.024677 master-0 kubenswrapper[7721]: I0216 02:10:25.024671 7721 scope.go:117] "RemoveContainer" containerID="e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e" Feb 16 02:10:25.024949 master-0 kubenswrapper[7721]: I0216 02:10:25.024915 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e"} err="failed to get container status \"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e\": rpc error: code = NotFound desc = could not find container \"e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e\": container with ID starting with 
e0c807dcdfc9df7b2c6822e6e59cca10d3dbe35d7119bfc15e6122189915614e not found: ID does not exist" Feb 16 02:10:25.024949 master-0 kubenswrapper[7721]: I0216 02:10:25.024943 7721 scope.go:117] "RemoveContainer" containerID="d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3" Feb 16 02:10:25.030309 master-0 kubenswrapper[7721]: I0216 02:10:25.029227 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.030309 master-0 kubenswrapper[7721]: I0216 02:10:25.029282 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.030549 master-0 kubenswrapper[7721]: I0216 02:10:25.030426 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.030612 master-0 kubenswrapper[7721]: I0216 02:10:25.030591 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vchs\" (UniqueName: \"kubernetes.io/projected/c442d349-668b-4d01-a097-5981b7a04eac-kube-api-access-4vchs\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: 
\"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.062764 master-0 kubenswrapper[7721]: I0216 02:10:25.062712 7721 scope.go:117] "RemoveContainer" containerID="90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22" Feb 16 02:10:25.074644 master-0 kubenswrapper[7721]: I0216 02:10:25.073898 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj"] Feb 16 02:10:25.081488 master-0 kubenswrapper[7721]: I0216 02:10:25.081402 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.091546 master-0 kubenswrapper[7721]: I0216 02:10:25.084540 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 02:10:25.091546 master-0 kubenswrapper[7721]: I0216 02:10:25.084629 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:10:25.091546 master-0 kubenswrapper[7721]: I0216 02:10:25.084926 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 02:10:25.091546 master-0 kubenswrapper[7721]: I0216 02:10:25.085110 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 02:10:25.091546 master-0 kubenswrapper[7721]: I0216 02:10:25.085290 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 02:10:25.092459 master-0 kubenswrapper[7721]: I0216 02:10:25.089042 7721 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-ccbvw" Feb 16 02:10:25.108349 master-0 kubenswrapper[7721]: I0216 02:10:25.108300 7721 scope.go:117] "RemoveContainer" containerID="d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3" Feb 16 02:10:25.129404 master-0 kubenswrapper[7721]: E0216 02:10:25.129338 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3\": container with ID starting with d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3 not found: ID does not exist" containerID="d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3" Feb 16 02:10:25.129619 master-0 kubenswrapper[7721]: I0216 02:10:25.129420 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3"} err="failed to get container status \"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3\": rpc error: code = NotFound desc = could not find container \"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3\": container with ID starting with d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3 not found: ID does not exist" Feb 16 02:10:25.129619 master-0 kubenswrapper[7721]: I0216 02:10:25.129515 7721 scope.go:117] "RemoveContainer" containerID="90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22" Feb 16 02:10:25.131728 master-0 kubenswrapper[7721]: I0216 02:10:25.131685 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.131826 master-0 kubenswrapper[7721]: I0216 02:10:25.131750 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.131826 master-0 kubenswrapper[7721]: I0216 02:10:25.131789 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.131826 master-0 kubenswrapper[7721]: E0216 02:10:25.131737 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22\": container with ID starting with 90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22 not found: ID does not exist" containerID="90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22" Feb 16 02:10:25.131826 master-0 kubenswrapper[7721]: I0216 02:10:25.131818 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " 
pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.131994 master-0 kubenswrapper[7721]: I0216 02:10:25.131884 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p8rc\" (UniqueName: \"kubernetes.io/projected/c4a146b2-c712-408a-97d8-5de3a84f3aaf-kube-api-access-6p8rc\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.131994 master-0 kubenswrapper[7721]: I0216 02:10:25.131922 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.132075 master-0 kubenswrapper[7721]: I0216 02:10:25.131989 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22"} err="failed to get container status \"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22\": rpc error: code = NotFound desc = could not find container \"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22\": container with ID starting with 90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22 not found: ID does not exist" Feb 16 02:10:25.132075 master-0 kubenswrapper[7721]: I0216 02:10:25.132031 7721 scope.go:117] "RemoveContainer" containerID="d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3" Feb 16 02:10:25.132180 master-0 kubenswrapper[7721]: I0216 02:10:25.131967 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4vchs\" (UniqueName: \"kubernetes.io/projected/c442d349-668b-4d01-a097-5981b7a04eac-kube-api-access-4vchs\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.132229 master-0 kubenswrapper[7721]: I0216 02:10:25.132196 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c4a146b2-c712-408a-97d8-5de3a84f3aaf-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.132229 master-0 kubenswrapper[7721]: I0216 02:10:25.132225 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.154077 master-0 kubenswrapper[7721]: I0216 02:10:25.154017 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.154273 master-0 kubenswrapper[7721]: I0216 02:10:25.154190 7721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3"} err="failed to get container status \"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3\": rpc error: code = NotFound desc = could not find container \"d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3\": container with ID starting with d9db1b3a5f1d0e336b63ef9a33d4cba07ee8d59948b784ecd5b54fc2c10d78c3 not found: ID does not exist" Feb 16 02:10:25.154273 master-0 kubenswrapper[7721]: I0216 02:10:25.154223 7721 scope.go:117] "RemoveContainer" containerID="90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22" Feb 16 02:10:25.154362 master-0 kubenswrapper[7721]: I0216 02:10:25.154333 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.178098 master-0 kubenswrapper[7721]: I0216 02:10:25.178033 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22"} err="failed to get container status \"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22\": rpc error: code = NotFound desc = could not find container \"90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22\": container with ID starting with 90b9d68e5e14d707d4874d2b9402ccfe55f90fc3cfd436b05bd088c745ba5d22 not found: ID does not exist" Feb 16 02:10:25.179291 master-0 kubenswrapper[7721]: I0216 02:10:25.179226 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vchs\" (UniqueName: \"kubernetes.io/projected/c442d349-668b-4d01-a097-5981b7a04eac-kube-api-access-4vchs\") pod 
\"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.185006 master-0 kubenswrapper[7721]: I0216 02:10:25.184960 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.233102 master-0 kubenswrapper[7721]: I0216 02:10:25.233058 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c4a146b2-c712-408a-97d8-5de3a84f3aaf-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.233451 master-0 kubenswrapper[7721]: I0216 02:10:25.233104 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.233451 master-0 kubenswrapper[7721]: I0216 02:10:25.233228 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c4a146b2-c712-408a-97d8-5de3a84f3aaf-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.233451 master-0 kubenswrapper[7721]: I0216 02:10:25.233242 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.233451 master-0 kubenswrapper[7721]: I0216 02:10:25.233350 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.233451 master-0 kubenswrapper[7721]: I0216 02:10:25.233396 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p8rc\" (UniqueName: \"kubernetes.io/projected/c4a146b2-c712-408a-97d8-5de3a84f3aaf-kube-api-access-6p8rc\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.234072 master-0 kubenswrapper[7721]: I0216 02:10:25.234038 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.234669 master-0 kubenswrapper[7721]: I0216 02:10:25.234638 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.237623 master-0 kubenswrapper[7721]: I0216 02:10:25.237592 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.258999 master-0 kubenswrapper[7721]: I0216 02:10:25.258958 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p8rc\" (UniqueName: \"kubernetes.io/projected/c4a146b2-c712-408a-97d8-5de3a84f3aaf-kube-api-access-6p8rc\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.356853 master-0 kubenswrapper[7721]: I0216 02:10:25.356802 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:10:25.379489 master-0 kubenswrapper[7721]: W0216 02:10:25.379421 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc442d349_668b_4d01_a097_5981b7a04eac.slice/crio-7498fd943d93408ca13b7d162b110c819eb4973f8cff45407cca83058e5ae25e WatchSource:0}: Error finding container 7498fd943d93408ca13b7d162b110c819eb4973f8cff45407cca83058e5ae25e: Status 404 returned error can't find the container with id 7498fd943d93408ca13b7d162b110c819eb4973f8cff45407cca83058e5ae25e Feb 16 02:10:25.468997 master-0 kubenswrapper[7721]: I0216 02:10:25.468949 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"] Feb 16 02:10:25.481845 master-0 kubenswrapper[7721]: W0216 02:10:25.481705 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8086f93_2d98_4218_afac_20a65e6bf943.slice/crio-bd5a6caddc3fbffdc59300004e09c460ee8e769674df58e5a2d88b92c736576e WatchSource:0}: Error finding container bd5a6caddc3fbffdc59300004e09c460ee8e769674df58e5a2d88b92c736576e: Status 404 returned error can't find the container with id bd5a6caddc3fbffdc59300004e09c460ee8e769674df58e5a2d88b92c736576e Feb 16 02:10:25.496273 master-0 kubenswrapper[7721]: I0216 02:10:25.496235 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:10:25.548211 master-0 kubenswrapper[7721]: W0216 02:10:25.548065 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a146b2_c712_408a_97d8_5de3a84f3aaf.slice/crio-3dde0495e5ec9d118f2ad7d1acff82faceae146e9c312fc50bf88cf24e85f414 WatchSource:0}: Error finding container 3dde0495e5ec9d118f2ad7d1acff82faceae146e9c312fc50bf88cf24e85f414: Status 404 returned error can't find the container with id 3dde0495e5ec9d118f2ad7d1acff82faceae146e9c312fc50bf88cf24e85f414 Feb 16 02:10:25.907844 master-0 kubenswrapper[7721]: I0216 02:10:25.907792 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" event={"ID":"0fbc8f91-f8cc-48d8-917c-64fa978069de","Type":"ContainerStarted","Data":"8abe98761584ab97e4093fee8b717c00060b36dfe073cec441042579955e8b20"} Feb 16 02:10:25.907844 master-0 kubenswrapper[7721]: I0216 02:10:25.907838 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" event={"ID":"0fbc8f91-f8cc-48d8-917c-64fa978069de","Type":"ContainerStarted","Data":"ff57256e3c94ab0a7475c6266a389bd0fe7c1c106205800fb64f78834d0ed452"} Feb 16 02:10:25.913295 master-0 kubenswrapper[7721]: I0216 02:10:25.913268 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" event={"ID":"c4a146b2-c712-408a-97d8-5de3a84f3aaf","Type":"ContainerStarted","Data":"0e6dfb235fe16f13df03b4a59ee89cd057fbaeee70e2959a56474787817390af"} Feb 16 02:10:25.913454 master-0 kubenswrapper[7721]: I0216 02:10:25.913302 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" event={"ID":"c4a146b2-c712-408a-97d8-5de3a84f3aaf","Type":"ContainerStarted","Data":"3dde0495e5ec9d118f2ad7d1acff82faceae146e9c312fc50bf88cf24e85f414"} Feb 16 02:10:25.919283 master-0 kubenswrapper[7721]: I0216 02:10:25.919254 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" event={"ID":"c442d349-668b-4d01-a097-5981b7a04eac","Type":"ContainerStarted","Data":"f8414eb06a6554efc7bc11e136b76ad73ccf5f94b2ef758b4f5d81223780895f"} Feb 16 02:10:25.919283 master-0 kubenswrapper[7721]: I0216 02:10:25.919283 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" event={"ID":"c442d349-668b-4d01-a097-5981b7a04eac","Type":"ContainerStarted","Data":"7498fd943d93408ca13b7d162b110c819eb4973f8cff45407cca83058e5ae25e"} Feb 16 02:10:25.921963 master-0 kubenswrapper[7721]: I0216 02:10:25.921922 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" event={"ID":"c8086f93-2d98-4218-afac-20a65e6bf943","Type":"ContainerStarted","Data":"30e7ac434b2ff8376d8f01a24e4deb497d95be6f36eeba191f63ffea76f881d2"} Feb 16 02:10:25.922580 master-0 kubenswrapper[7721]: I0216 02:10:25.921966 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" event={"ID":"c8086f93-2d98-4218-afac-20a65e6bf943","Type":"ContainerStarted","Data":"bd5a6caddc3fbffdc59300004e09c460ee8e769674df58e5a2d88b92c736576e"} Feb 16 02:10:25.926828 master-0 kubenswrapper[7721]: I0216 02:10:25.926725 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" podStartSLOduration=1.926677557 podStartE2EDuration="1.926677557s" podCreationTimestamp="2026-02-16 02:10:24 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:25.92524426 +0000 UTC m=+229.419478562" watchObservedRunningTime="2026-02-16 02:10:25.926677557 +0000 UTC m=+229.420911849" Feb 16 02:10:26.742632 master-0 kubenswrapper[7721]: I0216 02:10:26.742556 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b120a297-2f2b-43f4-a19a-dad863cb2272" path="/var/lib/kubelet/pods/b120a297-2f2b-43f4-a19a-dad863cb2272/volumes" Feb 16 02:10:26.744001 master-0 kubenswrapper[7721]: I0216 02:10:26.743937 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f27ae528-68de-4b59-9536-2d49b7a3cb29" path="/var/lib/kubelet/pods/f27ae528-68de-4b59-9536-2d49b7a3cb29/volumes" Feb 16 02:10:26.937516 master-0 kubenswrapper[7721]: I0216 02:10:26.937413 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" event={"ID":"c4a146b2-c712-408a-97d8-5de3a84f3aaf","Type":"ContainerStarted","Data":"2845c9400e3ed57dbd1571ce4ed551a26990ab0c554c0de95dc97621ff5c9c8c"} Feb 16 02:10:26.937516 master-0 kubenswrapper[7721]: I0216 02:10:26.937524 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" event={"ID":"c4a146b2-c712-408a-97d8-5de3a84f3aaf","Type":"ContainerStarted","Data":"7b8d5b60c64a954457f5d3632cc4eab151ef7d06b7f4c5d6693868e55012ceda"} Feb 16 02:10:26.940464 master-0 kubenswrapper[7721]: I0216 02:10:26.940375 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" event={"ID":"c442d349-668b-4d01-a097-5981b7a04eac","Type":"ContainerStarted","Data":"7519ecb1c789c2c061040595067f6c82e07370c9c08904abeb4e65bb29dba279"} Feb 16 02:10:26.943291 master-0 kubenswrapper[7721]: I0216 
02:10:26.943205 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" event={"ID":"c8086f93-2d98-4218-afac-20a65e6bf943","Type":"ContainerStarted","Data":"98ee9d5b95bc1d66557aa51d3718ad8ae4d6135a3674e22406a779cde9ce0095"} Feb 16 02:10:26.963595 master-0 kubenswrapper[7721]: I0216 02:10:26.963472 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" podStartSLOduration=1.963410019 podStartE2EDuration="1.963410019s" podCreationTimestamp="2026-02-16 02:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:26.960858922 +0000 UTC m=+230.455093224" watchObservedRunningTime="2026-02-16 02:10:26.963410019 +0000 UTC m=+230.457644351" Feb 16 02:10:26.985421 master-0 kubenswrapper[7721]: I0216 02:10:26.985324 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" podStartSLOduration=2.9851758090000002 podStartE2EDuration="2.985175809s" podCreationTimestamp="2026-02-16 02:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:26.983871785 +0000 UTC m=+230.478106117" watchObservedRunningTime="2026-02-16 02:10:26.985175809 +0000 UTC m=+230.479410111" Feb 16 02:10:27.023059 master-0 kubenswrapper[7721]: I0216 02:10:27.022846 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" podStartSLOduration=3.022401794 podStartE2EDuration="3.022401794s" podCreationTimestamp="2026-02-16 02:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 02:10:27.015296018 +0000 UTC m=+230.509530340" watchObservedRunningTime="2026-02-16 02:10:27.022401794 +0000 UTC m=+230.516636096" Feb 16 02:10:27.047784 master-0 kubenswrapper[7721]: I0216 02:10:27.047576 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-62wr2"] Feb 16 02:10:27.048348 master-0 kubenswrapper[7721]: I0216 02:10:27.048242 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" podUID="b6088119-1125-4271-8c0b-0675e700edd9" containerName="multus-admission-controller" containerID="cri-o://1ad7005008299c6a17798fe23c749281008c22dc9cf9892a566f5f5d5a934a24" gracePeriod=30 Feb 16 02:10:27.048348 master-0 kubenswrapper[7721]: I0216 02:10:27.048316 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" podUID="b6088119-1125-4271-8c0b-0675e700edd9" containerName="kube-rbac-proxy" containerID="cri-o://0a7334b26fd5842515d0403030d7f8a503f042b5470ce6d8a2e80440c021f184" gracePeriod=30 Feb 16 02:10:27.952818 master-0 kubenswrapper[7721]: I0216 02:10:27.952752 7721 generic.go:334] "Generic (PLEG): container finished" podID="b6088119-1125-4271-8c0b-0675e700edd9" containerID="0a7334b26fd5842515d0403030d7f8a503f042b5470ce6d8a2e80440c021f184" exitCode=0 Feb 16 02:10:27.953536 master-0 kubenswrapper[7721]: I0216 02:10:27.952822 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" event={"ID":"b6088119-1125-4271-8c0b-0675e700edd9","Type":"ContainerDied","Data":"0a7334b26fd5842515d0403030d7f8a503f042b5470ce6d8a2e80440c021f184"} Feb 16 02:10:29.567498 master-0 kubenswrapper[7721]: I0216 02:10:29.567391 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp"] Feb 16 
02:10:29.568527 master-0 kubenswrapper[7721]: I0216 02:10:29.568388 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.572817 master-0 kubenswrapper[7721]: I0216 02:10:29.572759 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 02:10:29.573183 master-0 kubenswrapper[7721]: I0216 02:10:29.573129 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-xrwft" Feb 16 02:10:29.587483 master-0 kubenswrapper[7721]: I0216 02:10:29.587186 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp"] Feb 16 02:10:29.616690 master-0 kubenswrapper[7721]: I0216 02:10:29.616610 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.616944 master-0 kubenswrapper[7721]: I0216 02:10:29.616730 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqrb\" (UniqueName: \"kubernetes.io/projected/97b8261a-91e3-435e-93f8-0a17f30359fd-kube-api-access-pmqrb\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.616944 master-0 kubenswrapper[7721]: I0216 02:10:29.616773 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.718541 master-0 kubenswrapper[7721]: I0216 02:10:29.718484 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.718959 master-0 kubenswrapper[7721]: I0216 02:10:29.718932 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqrb\" (UniqueName: \"kubernetes.io/projected/97b8261a-91e3-435e-93f8-0a17f30359fd-kube-api-access-pmqrb\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.719127 master-0 kubenswrapper[7721]: I0216 02:10:29.719104 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.720070 master-0 kubenswrapper[7721]: I0216 02:10:29.720019 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config\") pod 
\"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.723494 master-0 kubenswrapper[7721]: I0216 02:10:29.723417 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.739760 master-0 kubenswrapper[7721]: I0216 02:10:29.739673 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqrb\" (UniqueName: \"kubernetes.io/projected/97b8261a-91e3-435e-93f8-0a17f30359fd-kube-api-access-pmqrb\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:29.908112 master-0 kubenswrapper[7721]: I0216 02:10:29.907952 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:10:30.195216 master-0 kubenswrapper[7721]: I0216 02:10:30.195148 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp"] Feb 16 02:10:30.206881 master-0 kubenswrapper[7721]: W0216 02:10:30.206703 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97b8261a_91e3_435e_93f8_0a17f30359fd.slice/crio-667ddc72e9342237de83564517ac6e7d5264569a48bf2ad6536c670aaa43e0af WatchSource:0}: Error finding container 667ddc72e9342237de83564517ac6e7d5264569a48bf2ad6536c670aaa43e0af: Status 404 returned error can't find the container with id 667ddc72e9342237de83564517ac6e7d5264569a48bf2ad6536c670aaa43e0af Feb 16 02:10:30.597144 master-0 kubenswrapper[7721]: I0216 02:10:30.595683 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd"] Feb 16 02:10:30.597144 master-0 kubenswrapper[7721]: I0216 02:10:30.597037 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" Feb 16 02:10:30.609503 master-0 kubenswrapper[7721]: I0216 02:10:30.609345 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-864ddd5f56-ffptx"] Feb 16 02:10:30.612375 master-0 kubenswrapper[7721]: I0216 02:10:30.610639 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.615299 master-0 kubenswrapper[7721]: I0216 02:10:30.615236 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 02:10:30.615517 master-0 kubenswrapper[7721]: I0216 02:10:30.615418 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 02:10:30.615611 master-0 kubenswrapper[7721]: I0216 02:10:30.615234 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 02:10:30.615820 master-0 kubenswrapper[7721]: I0216 02:10:30.615766 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 02:10:30.616023 master-0 kubenswrapper[7721]: I0216 02:10:30.615954 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 02:10:30.616023 master-0 kubenswrapper[7721]: I0216 02:10:30.615966 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 02:10:30.626870 master-0 kubenswrapper[7721]: I0216 02:10:30.626784 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k"] Feb 16 02:10:30.628325 master-0 kubenswrapper[7721]: I0216 02:10:30.628273 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7c8qv"] Feb 16 02:10:30.628897 master-0 kubenswrapper[7721]: I0216 02:10:30.628844 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:10:30.629407 master-0 kubenswrapper[7721]: I0216 02:10:30.629352 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.629523 master-0 kubenswrapper[7721]: I0216 02:10:30.629485 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd"] Feb 16 02:10:30.631557 master-0 kubenswrapper[7721]: I0216 02:10:30.631493 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 16 02:10:30.639560 master-0 kubenswrapper[7721]: I0216 02:10:30.632528 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-bhr6t" Feb 16 02:10:30.639560 master-0 kubenswrapper[7721]: I0216 02:10:30.632704 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 16 02:10:30.651223 master-0 kubenswrapper[7721]: I0216 02:10:30.651083 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k"] Feb 16 02:10:30.733186 master-0 kubenswrapper[7721]: I0216 02:10:30.733045 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85sdg\" (UniqueName: \"kubernetes.io/projected/17390d9a-148d-4927-a831-5bc4873c43d5-kube-api-access-85sdg\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.733186 master-0 kubenswrapper[7721]: I0216 02:10:30.733120 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-default-certificate\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.733186 master-0 
kubenswrapper[7721]: I0216 02:10:30.733156 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/37fd7550-cc81-4180-8540-0bc5f62f63d2-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-9dx2k\" (UID: \"37fd7550-cc81-4180-8540-0bc5f62f63d2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:10:30.733490 master-0 kubenswrapper[7721]: I0216 02:10:30.733198 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9jt\" (UniqueName: \"kubernetes.io/projected/75915935-00a2-44ce-99d1-03e2492044d4-kube-api-access-pc9jt\") pod \"network-check-source-7d8f4c8c66-kcnkd\" (UID: \"75915935-00a2-44ce-99d1-03e2492044d4\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" Feb 16 02:10:30.733490 master-0 kubenswrapper[7721]: I0216 02:10:30.733229 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17390d9a-148d-4927-a831-5bc4873c43d5-service-ca-bundle\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.733490 master-0 kubenswrapper[7721]: I0216 02:10:30.733273 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-metrics-certs\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.733490 master-0 kubenswrapper[7721]: I0216 02:10:30.733335 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/009e7f72-fcdc-4a88-9769-09f95bccee6e-ready\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.733490 master-0 kubenswrapper[7721]: I0216 02:10:30.733361 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdsdl\" (UniqueName: \"kubernetes.io/projected/009e7f72-fcdc-4a88-9769-09f95bccee6e-kube-api-access-kdsdl\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.733490 master-0 kubenswrapper[7721]: I0216 02:10:30.733387 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/009e7f72-fcdc-4a88-9769-09f95bccee6e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.733490 master-0 kubenswrapper[7721]: I0216 02:10:30.733468 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/009e7f72-fcdc-4a88-9769-09f95bccee6e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.733685 master-0 kubenswrapper[7721]: I0216 02:10:30.733499 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-stats-auth\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.835709 master-0 
kubenswrapper[7721]: I0216 02:10:30.835624 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/009e7f72-fcdc-4a88-9769-09f95bccee6e-ready\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.836008 master-0 kubenswrapper[7721]: I0216 02:10:30.835982 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdsdl\" (UniqueName: \"kubernetes.io/projected/009e7f72-fcdc-4a88-9769-09f95bccee6e-kube-api-access-kdsdl\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.836580 master-0 kubenswrapper[7721]: I0216 02:10:30.836556 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/009e7f72-fcdc-4a88-9769-09f95bccee6e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.836949 master-0 kubenswrapper[7721]: I0216 02:10:30.836931 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/009e7f72-fcdc-4a88-9769-09f95bccee6e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.838085 master-0 kubenswrapper[7721]: I0216 02:10:30.838062 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-stats-auth\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.838719 master-0 kubenswrapper[7721]: I0216 02:10:30.838698 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85sdg\" (UniqueName: \"kubernetes.io/projected/17390d9a-148d-4927-a831-5bc4873c43d5-kube-api-access-85sdg\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.838846 master-0 kubenswrapper[7721]: I0216 02:10:30.838826 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-default-certificate\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.838953 master-0 kubenswrapper[7721]: I0216 02:10:30.838934 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/37fd7550-cc81-4180-8540-0bc5f62f63d2-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-9dx2k\" (UID: \"37fd7550-cc81-4180-8540-0bc5f62f63d2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:10:30.839104 master-0 kubenswrapper[7721]: I0216 02:10:30.839086 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc9jt\" (UniqueName: \"kubernetes.io/projected/75915935-00a2-44ce-99d1-03e2492044d4-kube-api-access-pc9jt\") pod \"network-check-source-7d8f4c8c66-kcnkd\" (UID: \"75915935-00a2-44ce-99d1-03e2492044d4\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" Feb 16 02:10:30.839212 master-0 kubenswrapper[7721]: I0216 02:10:30.839196 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17390d9a-148d-4927-a831-5bc4873c43d5-service-ca-bundle\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.839320 master-0 kubenswrapper[7721]: I0216 02:10:30.839304 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-metrics-certs\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.839534 master-0 kubenswrapper[7721]: I0216 02:10:30.836729 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/009e7f72-fcdc-4a88-9769-09f95bccee6e-ready\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.839534 master-0 kubenswrapper[7721]: I0216 02:10:30.836799 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/009e7f72-fcdc-4a88-9769-09f95bccee6e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.839691 master-0 kubenswrapper[7721]: I0216 02:10:30.838013 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/009e7f72-fcdc-4a88-9769-09f95bccee6e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.843762 master-0 kubenswrapper[7721]: I0216 02:10:30.842657 7721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17390d9a-148d-4927-a831-5bc4873c43d5-service-ca-bundle\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.843762 master-0 kubenswrapper[7721]: I0216 02:10:30.843552 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-stats-auth\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.844525 master-0 kubenswrapper[7721]: I0216 02:10:30.844502 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/37fd7550-cc81-4180-8540-0bc5f62f63d2-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-9dx2k\" (UID: \"37fd7550-cc81-4180-8540-0bc5f62f63d2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:10:30.845261 master-0 kubenswrapper[7721]: I0216 02:10:30.845210 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-metrics-certs\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.848556 master-0 kubenswrapper[7721]: I0216 02:10:30.848481 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-default-certificate\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.863968 
master-0 kubenswrapper[7721]: I0216 02:10:30.863904 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85sdg\" (UniqueName: \"kubernetes.io/projected/17390d9a-148d-4927-a831-5bc4873c43d5-kube-api-access-85sdg\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.865226 master-0 kubenswrapper[7721]: I0216 02:10:30.865189 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdsdl\" (UniqueName: \"kubernetes.io/projected/009e7f72-fcdc-4a88-9769-09f95bccee6e-kube-api-access-kdsdl\") pod \"cni-sysctl-allowlist-ds-7c8qv\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:30.868568 master-0 kubenswrapper[7721]: I0216 02:10:30.868538 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc9jt\" (UniqueName: \"kubernetes.io/projected/75915935-00a2-44ce-99d1-03e2492044d4-kube-api-access-pc9jt\") pod \"network-check-source-7d8f4c8c66-kcnkd\" (UID: \"75915935-00a2-44ce-99d1-03e2492044d4\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" Feb 16 02:10:30.936213 master-0 kubenswrapper[7721]: I0216 02:10:30.936134 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" Feb 16 02:10:30.965309 master-0 kubenswrapper[7721]: I0216 02:10:30.965242 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:30.976499 master-0 kubenswrapper[7721]: I0216 02:10:30.976416 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" event={"ID":"97b8261a-91e3-435e-93f8-0a17f30359fd","Type":"ContainerStarted","Data":"6530daccb5a5325d05cd9e91b887a1bfb58924fb5d67dacd088ef3374a73c08f"} Feb 16 02:10:30.976580 master-0 kubenswrapper[7721]: I0216 02:10:30.976505 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" event={"ID":"97b8261a-91e3-435e-93f8-0a17f30359fd","Type":"ContainerStarted","Data":"eeb4129ac5cfd918d77220d754ae426251845c8fe9b138fc90d8ea5e2b4ddfa4"} Feb 16 02:10:30.976580 master-0 kubenswrapper[7721]: I0216 02:10:30.976533 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" event={"ID":"97b8261a-91e3-435e-93f8-0a17f30359fd","Type":"ContainerStarted","Data":"667ddc72e9342237de83564517ac6e7d5264569a48bf2ad6536c670aaa43e0af"} Feb 16 02:10:31.000955 master-0 kubenswrapper[7721]: I0216 02:10:31.000898 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:10:31.001454 master-0 kubenswrapper[7721]: W0216 02:10:31.001153 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17390d9a_148d_4927_a831_5bc4873c43d5.slice/crio-6e4bdeb9de57d42d4e965ab51dbc3a2aa873e2346f227d5e0b75299b42bac97b WatchSource:0}: Error finding container 6e4bdeb9de57d42d4e965ab51dbc3a2aa873e2346f227d5e0b75299b42bac97b: Status 404 returned error can't find the container with id 6e4bdeb9de57d42d4e965ab51dbc3a2aa873e2346f227d5e0b75299b42bac97b Feb 16 02:10:31.022535 master-0 kubenswrapper[7721]: I0216 02:10:31.022345 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" podStartSLOduration=2.022305412 podStartE2EDuration="2.022305412s" podCreationTimestamp="2026-02-16 02:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:31.009201779 +0000 UTC m=+234.503436071" watchObservedRunningTime="2026-02-16 02:10:31.022305412 +0000 UTC m=+234.516539684" Feb 16 02:10:31.024095 master-0 kubenswrapper[7721]: I0216 02:10:31.024040 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:31.076680 master-0 kubenswrapper[7721]: W0216 02:10:31.076633 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009e7f72_fcdc_4a88_9769_09f95bccee6e.slice/crio-9a477bb9fb47b237818cd392a5e13b01d794b117ac5f6e24e794b17acdd31ff5 WatchSource:0}: Error finding container 9a477bb9fb47b237818cd392a5e13b01d794b117ac5f6e24e794b17acdd31ff5: Status 404 returned error can't find the container with id 9a477bb9fb47b237818cd392a5e13b01d794b117ac5f6e24e794b17acdd31ff5 Feb 16 02:10:31.437721 master-0 kubenswrapper[7721]: I0216 02:10:31.437618 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd"] Feb 16 02:10:31.446307 master-0 kubenswrapper[7721]: W0216 02:10:31.446241 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75915935_00a2_44ce_99d1_03e2492044d4.slice/crio-a1854497ff5ad67aee5d94d7312b87d4baf7af5b3f4e0b712c8c8a5cd16079c9 WatchSource:0}: Error finding container a1854497ff5ad67aee5d94d7312b87d4baf7af5b3f4e0b712c8c8a5cd16079c9: Status 404 returned error can't find the container with id a1854497ff5ad67aee5d94d7312b87d4baf7af5b3f4e0b712c8c8a5cd16079c9 Feb 16 02:10:31.536207 master-0 kubenswrapper[7721]: I0216 02:10:31.536099 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k"] Feb 16 02:10:31.619842 master-0 kubenswrapper[7721]: I0216 02:10:31.619729 7721 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 02:10:32.022605 master-0 kubenswrapper[7721]: I0216 02:10:32.022517 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" event={"ID":"75915935-00a2-44ce-99d1-03e2492044d4","Type":"ContainerStarted","Data":"e0652db8c2e6dbd0750964704eadb4949f2c33129ad35641d1222c7644772165"} Feb 16 02:10:32.022605 master-0 kubenswrapper[7721]: I0216 02:10:32.022596 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" event={"ID":"75915935-00a2-44ce-99d1-03e2492044d4","Type":"ContainerStarted","Data":"a1854497ff5ad67aee5d94d7312b87d4baf7af5b3f4e0b712c8c8a5cd16079c9"} Feb 16 02:10:32.027720 master-0 kubenswrapper[7721]: I0216 02:10:32.027525 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" event={"ID":"009e7f72-fcdc-4a88-9769-09f95bccee6e","Type":"ContainerStarted","Data":"2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44"} Feb 16 02:10:32.027720 master-0 kubenswrapper[7721]: I0216 02:10:32.027573 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" event={"ID":"009e7f72-fcdc-4a88-9769-09f95bccee6e","Type":"ContainerStarted","Data":"9a477bb9fb47b237818cd392a5e13b01d794b117ac5f6e24e794b17acdd31ff5"} Feb 16 02:10:32.028048 master-0 kubenswrapper[7721]: I0216 02:10:32.028006 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:32.033488 master-0 kubenswrapper[7721]: I0216 02:10:32.032973 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" event={"ID":"37fd7550-cc81-4180-8540-0bc5f62f63d2","Type":"ContainerStarted","Data":"5be46040ce7c3ed3d2b8402e810c80c1d12c6e7664eed391a78116018fc06276"} Feb 16 02:10:32.035263 master-0 kubenswrapper[7721]: I0216 02:10:32.035208 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" 
event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerStarted","Data":"6e4bdeb9de57d42d4e965ab51dbc3a2aa873e2346f227d5e0b75299b42bac97b"} Feb 16 02:10:32.050544 master-0 kubenswrapper[7721]: I0216 02:10:32.050458 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" podStartSLOduration=285.050415578 podStartE2EDuration="4m45.050415578s" podCreationTimestamp="2026-02-16 02:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:32.049430442 +0000 UTC m=+235.543664704" watchObservedRunningTime="2026-02-16 02:10:32.050415578 +0000 UTC m=+235.544649880" Feb 16 02:10:32.068153 master-0 kubenswrapper[7721]: I0216 02:10:32.068096 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:10:32.075570 master-0 kubenswrapper[7721]: I0216 02:10:32.073807 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" podStartSLOduration=2.07379539 podStartE2EDuration="2.07379539s" podCreationTimestamp="2026-02-16 02:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:32.07341467 +0000 UTC m=+235.567648942" watchObservedRunningTime="2026-02-16 02:10:32.07379539 +0000 UTC m=+235.568029672" Feb 16 02:10:33.084622 master-0 kubenswrapper[7721]: I0216 02:10:33.084542 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7c8qv"] Feb 16 02:10:34.037883 master-0 kubenswrapper[7721]: I0216 02:10:34.037777 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5zv6j"] Feb 16 02:10:34.038754 master-0 kubenswrapper[7721]: I0216 
02:10:34.038738 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.061285 master-0 kubenswrapper[7721]: I0216 02:10:34.061234 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nfl29" Feb 16 02:10:34.061285 master-0 kubenswrapper[7721]: I0216 02:10:34.061262 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 02:10:34.061576 master-0 kubenswrapper[7721]: I0216 02:10:34.061558 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 02:10:34.072646 master-0 kubenswrapper[7721]: I0216 02:10:34.072522 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" event={"ID":"37fd7550-cc81-4180-8540-0bc5f62f63d2","Type":"ContainerStarted","Data":"2bff5376358603a1030565e9a2621fa7edffebee75aa44c744c4f3284c8b686e"} Feb 16 02:10:34.072827 master-0 kubenswrapper[7721]: I0216 02:10:34.072739 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:10:34.074482 master-0 kubenswrapper[7721]: I0216 02:10:34.074393 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerStarted","Data":"627ab5d9a2bbd36bf2da2e153f10cfc3717737712db4adb69838076f9b75b2a5"} Feb 16 02:10:34.087524 master-0 kubenswrapper[7721]: I0216 02:10:34.087371 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:10:34.104898 master-0 kubenswrapper[7721]: 
I0216 02:10:34.104859 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvmr\" (UniqueName: \"kubernetes.io/projected/d8bbd369-4219-48ef-ae2d-b45c81789403-kube-api-access-grvmr\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.104983 master-0 kubenswrapper[7721]: I0216 02:10:34.104903 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.104983 master-0 kubenswrapper[7721]: I0216 02:10:34.104956 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.116899 master-0 kubenswrapper[7721]: I0216 02:10:34.116825 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podStartSLOduration=205.613002707 podStartE2EDuration="3m28.116803486s" podCreationTimestamp="2026-02-16 02:07:06 +0000 UTC" firstStartedPulling="2026-02-16 02:10:31.037071349 +0000 UTC m=+234.531305651" lastFinishedPulling="2026-02-16 02:10:33.540872168 +0000 UTC m=+237.035106430" observedRunningTime="2026-02-16 02:10:34.112493314 +0000 UTC m=+237.606727596" watchObservedRunningTime="2026-02-16 02:10:34.116803486 +0000 UTC m=+237.611037758" Feb 16 02:10:34.137796 master-0 kubenswrapper[7721]: I0216 02:10:34.137389 
7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" podStartSLOduration=191.17193078 podStartE2EDuration="3m13.136840091s" podCreationTimestamp="2026-02-16 02:07:21 +0000 UTC" firstStartedPulling="2026-02-16 02:10:31.562482044 +0000 UTC m=+235.056716326" lastFinishedPulling="2026-02-16 02:10:33.527391385 +0000 UTC m=+237.021625637" observedRunningTime="2026-02-16 02:10:34.13335323 +0000 UTC m=+237.627587492" watchObservedRunningTime="2026-02-16 02:10:34.136840091 +0000 UTC m=+237.631074383" Feb 16 02:10:34.206574 master-0 kubenswrapper[7721]: I0216 02:10:34.206493 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvmr\" (UniqueName: \"kubernetes.io/projected/d8bbd369-4219-48ef-ae2d-b45c81789403-kube-api-access-grvmr\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.206574 master-0 kubenswrapper[7721]: I0216 02:10:34.206581 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.206915 master-0 kubenswrapper[7721]: I0216 02:10:34.206688 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.212777 master-0 kubenswrapper[7721]: I0216 02:10:34.212695 7721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.215597 master-0 kubenswrapper[7721]: I0216 02:10:34.214315 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.227877 master-0 kubenswrapper[7721]: I0216 02:10:34.227851 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvmr\" (UniqueName: \"kubernetes.io/projected/d8bbd369-4219-48ef-ae2d-b45c81789403-kube-api-access-grvmr\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.404039 master-0 kubenswrapper[7721]: I0216 02:10:34.403903 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:10:34.791075 master-0 kubenswrapper[7721]: I0216 02:10:34.791030 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-v9mmd"] Feb 16 02:10:34.791873 master-0 kubenswrapper[7721]: I0216 02:10:34.791856 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:34.796258 master-0 kubenswrapper[7721]: I0216 02:10:34.794033 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 02:10:34.796258 master-0 kubenswrapper[7721]: I0216 02:10:34.794178 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 02:10:34.796258 master-0 kubenswrapper[7721]: I0216 02:10:34.794393 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 02:10:34.796258 master-0 kubenswrapper[7721]: I0216 02:10:34.794525 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmfsl" Feb 16 02:10:34.816233 master-0 kubenswrapper[7721]: I0216 02:10:34.816195 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-v9mmd"] Feb 16 02:10:34.917805 master-0 kubenswrapper[7721]: I0216 02:10:34.917672 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kws4h\" (UniqueName: \"kubernetes.io/projected/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-kube-api-access-kws4h\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:34.918083 master-0 kubenswrapper[7721]: I0216 02:10:34.917821 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" 
Feb 16 02:10:34.918083 master-0 kubenswrapper[7721]: I0216 02:10:34.917887 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:34.918225 master-0 kubenswrapper[7721]: I0216 02:10:34.918083 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:34.965655 master-0 kubenswrapper[7721]: I0216 02:10:34.965472 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:10:34.969558 master-0 kubenswrapper[7721]: I0216 02:10:34.969502 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:34.969558 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:34.969558 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:34.969558 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:34.969993 master-0 kubenswrapper[7721]: I0216 02:10:34.969946 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:35.020205 master-0 kubenswrapper[7721]: I0216 02:10:35.020083 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kws4h\" (UniqueName: \"kubernetes.io/projected/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-kube-api-access-kws4h\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.020205 master-0 kubenswrapper[7721]: I0216 02:10:35.020170 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.020580 master-0 kubenswrapper[7721]: I0216 02:10:35.020229 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.020580 master-0 kubenswrapper[7721]: I0216 02:10:35.020296 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.021844 master-0 kubenswrapper[7721]: I0216 02:10:35.021793 7721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.026494 master-0 kubenswrapper[7721]: I0216 02:10:35.026403 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.027975 master-0 kubenswrapper[7721]: I0216 02:10:35.027913 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.051269 master-0 kubenswrapper[7721]: I0216 02:10:35.048350 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kws4h\" (UniqueName: \"kubernetes.io/projected/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-kube-api-access-kws4h\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.086080 master-0 kubenswrapper[7721]: I0216 02:10:35.085969 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" podUID="009e7f72-fcdc-4a88-9769-09f95bccee6e" containerName="kube-multus-additional-cni-plugins" 
containerID="cri-o://2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" gracePeriod=30 Feb 16 02:10:35.086305 master-0 kubenswrapper[7721]: I0216 02:10:35.086282 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5zv6j" event={"ID":"d8bbd369-4219-48ef-ae2d-b45c81789403","Type":"ContainerStarted","Data":"63423188b9cc52bc915ea0ef9816d0d6cf1afa7a4cffcc38208aaf29551c28c0"} Feb 16 02:10:35.086386 master-0 kubenswrapper[7721]: I0216 02:10:35.086317 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5zv6j" event={"ID":"d8bbd369-4219-48ef-ae2d-b45c81789403","Type":"ContainerStarted","Data":"c4ca903ca847491a2f54378905e0af98cc694e7eee50b6b0fc0352bdc61947b5"} Feb 16 02:10:35.110179 master-0 kubenswrapper[7721]: I0216 02:10:35.110127 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:10:35.599561 master-0 kubenswrapper[7721]: I0216 02:10:35.599288 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5zv6j" podStartSLOduration=1.599260087 podStartE2EDuration="1.599260087s" podCreationTimestamp="2026-02-16 02:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:35.122799704 +0000 UTC m=+238.617034006" watchObservedRunningTime="2026-02-16 02:10:35.599260087 +0000 UTC m=+239.093494389" Feb 16 02:10:35.605702 master-0 kubenswrapper[7721]: I0216 02:10:35.605652 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-v9mmd"] Feb 16 02:10:35.611108 master-0 kubenswrapper[7721]: W0216 02:10:35.611040 7721 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod695d1f01_d3c1_4fb9_9dda_daf33eae11f5.slice/crio-0926ad4d371086f5079e933378a680e9ca645c38b72ce4fd73fbac3448ac6883 WatchSource:0}: Error finding container 0926ad4d371086f5079e933378a680e9ca645c38b72ce4fd73fbac3448ac6883: Status 404 returned error can't find the container with id 0926ad4d371086f5079e933378a680e9ca645c38b72ce4fd73fbac3448ac6883
Feb 16 02:10:35.968095 master-0 kubenswrapper[7721]: I0216 02:10:35.968029 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:10:35.968095 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:10:35.968095 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:10:35.968095 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:10:35.968383 master-0 kubenswrapper[7721]: I0216 02:10:35.968129 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:10:36.096425 master-0 kubenswrapper[7721]: I0216 02:10:36.096355 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" event={"ID":"695d1f01-d3c1-4fb9-9dda-daf33eae11f5","Type":"ContainerStarted","Data":"0926ad4d371086f5079e933378a680e9ca645c38b72ce4fd73fbac3448ac6883"}
Feb 16 02:10:36.967633 master-0 kubenswrapper[7721]: I0216 02:10:36.967584 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:10:36.967633 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:10:36.967633 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:10:36.967633 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:10:36.968036 master-0 kubenswrapper[7721]: I0216 02:10:36.967646 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:10:37.968577 master-0 kubenswrapper[7721]: I0216 02:10:37.968509 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:10:37.968577 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:10:37.968577 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:10:37.968577 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:10:37.969215 master-0 kubenswrapper[7721]: I0216 02:10:37.968592 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:10:38.112392 master-0 kubenswrapper[7721]: I0216 02:10:38.112244 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" event={"ID":"695d1f01-d3c1-4fb9-9dda-daf33eae11f5","Type":"ContainerStarted","Data":"20f993577101428227ca371142f7e4f7c44b3bce893911c45e24db20490e15bc"}
Feb 16 02:10:38.112392 master-0 kubenswrapper[7721]: I0216 02:10:38.112338 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" event={"ID":"695d1f01-d3c1-4fb9-9dda-daf33eae11f5","Type":"ContainerStarted","Data":"f395e0a52145f0551827c30dd2830d34144ce8d137e6f8e2eb3a3215c8e1fb78"}
Feb 16 02:10:38.968890 master-0 kubenswrapper[7721]: I0216 02:10:38.968793 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:10:38.968890 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:10:38.968890 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:10:38.968890 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:10:38.970012 master-0 kubenswrapper[7721]: I0216 02:10:38.968929 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:10:39.967740 master-0 kubenswrapper[7721]: I0216 02:10:39.967676 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:10:39.967740 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:10:39.967740 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:10:39.967740 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:10:39.968014 master-0 kubenswrapper[7721]: I0216 02:10:39.967764 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:10:40.141145 master-0 kubenswrapper[7721]: I0216 02:10:40.141082 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" podStartSLOduration=4.271850115 podStartE2EDuration="6.141066943s" podCreationTimestamp="2026-02-16 02:10:34 +0000 UTC" firstStartedPulling="2026-02-16 02:10:35.615517482 +0000 UTC m=+239.109751784" lastFinishedPulling="2026-02-16 02:10:37.48473435 +0000 UTC m=+240.978968612" observedRunningTime="2026-02-16 02:10:38.149334159 +0000 UTC m=+241.643568481" watchObservedRunningTime="2026-02-16 02:10:40.141066943 +0000 UTC m=+243.635301195"
Feb 16 02:10:40.142637 master-0 kubenswrapper[7721]: I0216 02:10:40.142605 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"]
Feb 16 02:10:40.143621 master-0 kubenswrapper[7721]: I0216 02:10:40.143600 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.146219 master-0 kubenswrapper[7721]: I0216 02:10:40.146178 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 16 02:10:40.146387 master-0 kubenswrapper[7721]: I0216 02:10:40.146347 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-bxb2h"
Feb 16 02:10:40.146494 master-0 kubenswrapper[7721]: I0216 02:10:40.146364 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 16 02:10:40.156257 master-0 kubenswrapper[7721]: I0216 02:10:40.150265 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-jxbq6"]
Feb 16 02:10:40.156257 master-0 kubenswrapper[7721]: I0216 02:10:40.152606 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.156257 master-0 kubenswrapper[7721]: I0216 02:10:40.155526 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 16 02:10:40.156257 master-0 kubenswrapper[7721]: I0216 02:10:40.155758 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-r9rvr"
Feb 16 02:10:40.168004 master-0 kubenswrapper[7721]: I0216 02:10:40.160342 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 16 02:10:40.183662 master-0 kubenswrapper[7721]: I0216 02:10:40.183502 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"]
Feb 16 02:10:40.184934 master-0 kubenswrapper[7721]: I0216 02:10:40.184908 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.185925 master-0 kubenswrapper[7721]: I0216 02:10:40.185883 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"]
Feb 16 02:10:40.187197 master-0 kubenswrapper[7721]: I0216 02:10:40.187173 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9dqnm"
Feb 16 02:10:40.187710 master-0 kubenswrapper[7721]: I0216 02:10:40.187692 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 16 02:10:40.188011 master-0 kubenswrapper[7721]: I0216 02:10:40.187995 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 16 02:10:40.188255 master-0 kubenswrapper[7721]: I0216 02:10:40.188238 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 16 02:10:40.219296 master-0 kubenswrapper[7721]: I0216 02:10:40.219217 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"]
Feb 16 02:10:40.234405 master-0 kubenswrapper[7721]: I0216 02:10:40.234349 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.234578 master-0 kubenswrapper[7721]: I0216 02:10:40.234427 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sctj8\" (UniqueName: \"kubernetes.io/projected/0a900f93-91c9-4782-89a3-1cc09f3aec95-kube-api-access-sctj8\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.234578 master-0 kubenswrapper[7721]: I0216 02:10:40.234476 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-sys\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.234578 master-0 kubenswrapper[7721]: I0216 02:10:40.234504 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/86af980a-2653-40c3-a368-a795d7fb8558-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.234578 master-0 kubenswrapper[7721]: I0216 02:10:40.234525 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-root\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.234578 master-0 kubenswrapper[7721]: I0216 02:10:40.234547 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.234727 master-0 kubenswrapper[7721]: I0216 02:10:40.234578 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.234727 master-0 kubenswrapper[7721]: I0216 02:10:40.234613 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.234727 master-0 kubenswrapper[7721]: I0216 02:10:40.234641 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-textfile\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.234727 master-0 kubenswrapper[7721]: I0216 02:10:40.234662 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.234727 master-0 kubenswrapper[7721]: I0216 02:10:40.234692 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.234727 master-0 kubenswrapper[7721]: I0216 02:10:40.234719 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mp4\" (UniqueName: \"kubernetes.io/projected/32d420d6-bbda-42c0-82fe-8b187ad91607-kube-api-access-w4mp4\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.234952 master-0 kubenswrapper[7721]: I0216 02:10:40.234748 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.234952 master-0 kubenswrapper[7721]: I0216 02:10:40.234773 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.234952 master-0 kubenswrapper[7721]: I0216 02:10:40.234793 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-wtmp\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.234952 master-0 kubenswrapper[7721]: I0216 02:10:40.234819 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.234952 master-0 kubenswrapper[7721]: I0216 02:10:40.234843 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgqtf\" (UniqueName: \"kubernetes.io/projected/86af980a-2653-40c3-a368-a795d7fb8558-kube-api-access-tgqtf\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.234952 master-0 kubenswrapper[7721]: I0216 02:10:40.234873 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.335811 master-0 kubenswrapper[7721]: I0216 02:10:40.335749 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sctj8\" (UniqueName: \"kubernetes.io/projected/0a900f93-91c9-4782-89a3-1cc09f3aec95-kube-api-access-sctj8\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.335811 master-0 kubenswrapper[7721]: I0216 02:10:40.335807 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-sys\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.336038 master-0 kubenswrapper[7721]: I0216 02:10:40.335836 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/86af980a-2653-40c3-a368-a795d7fb8558-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.336038 master-0 kubenswrapper[7721]: I0216 02:10:40.335858 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-root\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.336038 master-0 kubenswrapper[7721]: I0216 02:10:40.335881 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.336038 master-0 kubenswrapper[7721]: I0216 02:10:40.335906 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.336038 master-0 kubenswrapper[7721]: I0216 02:10:40.335935 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.336038 master-0 kubenswrapper[7721]: I0216 02:10:40.335963 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-textfile\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.336038 master-0 kubenswrapper[7721]: I0216 02:10:40.335984 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.336038 master-0 kubenswrapper[7721]: I0216 02:10:40.336013 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.336247 master-0 kubenswrapper[7721]: I0216 02:10:40.336041 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mp4\" (UniqueName: \"kubernetes.io/projected/32d420d6-bbda-42c0-82fe-8b187ad91607-kube-api-access-w4mp4\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.336247 master-0 kubenswrapper[7721]: I0216 02:10:40.336064 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.336247 master-0 kubenswrapper[7721]: I0216 02:10:40.336087 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.336247 master-0 kubenswrapper[7721]: I0216 02:10:40.336116 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-wtmp\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.336247 master-0 kubenswrapper[7721]: I0216 02:10:40.336139 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.336247 master-0 kubenswrapper[7721]: I0216 02:10:40.336162 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgqtf\" (UniqueName: \"kubernetes.io/projected/86af980a-2653-40c3-a368-a795d7fb8558-kube-api-access-tgqtf\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.336247 master-0 kubenswrapper[7721]: I0216 02:10:40.336204 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.336465 master-0 kubenswrapper[7721]: I0216 02:10:40.336250 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.336790 master-0 kubenswrapper[7721]: E0216 02:10:40.336745 7721 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Feb 16 02:10:40.336844 master-0 kubenswrapper[7721]: I0216 02:10:40.336784 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-root\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.336844 master-0 kubenswrapper[7721]: E0216 02:10:40.336833 7721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls podName:86af980a-2653-40c3-a368-a795d7fb8558 nodeName:}" failed. No retries permitted until 2026-02-16 02:10:40.836815067 +0000 UTC m=+244.331049329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-2gx8b" (UID: "86af980a-2653-40c3-a368-a795d7fb8558") : secret "kube-state-metrics-tls" not found
Feb 16 02:10:40.337300 master-0 kubenswrapper[7721]: I0216 02:10:40.337275 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.339908 master-0 kubenswrapper[7721]: I0216 02:10:40.337655 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.339908 master-0 kubenswrapper[7721]: I0216 02:10:40.337725 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-sys\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.339908 master-0 kubenswrapper[7721]: I0216 02:10:40.338171 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/86af980a-2653-40c3-a368-a795d7fb8558-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.340058 master-0 kubenswrapper[7721]: I0216 02:10:40.340005 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.340781 master-0 kubenswrapper[7721]: I0216 02:10:40.340122 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-wtmp\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.340781 master-0 kubenswrapper[7721]: I0216 02:10:40.340528 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-textfile\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.341103 master-0 kubenswrapper[7721]: I0216 02:10:40.341059 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.341472 master-0 kubenswrapper[7721]: I0216 02:10:40.341182 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.341678 master-0 kubenswrapper[7721]: I0216 02:10:40.341647 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.344873 master-0 kubenswrapper[7721]: I0216 02:10:40.344836 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.344961 master-0 kubenswrapper[7721]: I0216 02:10:40.344937 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.349451 master-0 kubenswrapper[7721]: I0216 02:10:40.348122 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.353674 master-0 kubenswrapper[7721]: I0216 02:10:40.353631 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mp4\" (UniqueName: \"kubernetes.io/projected/32d420d6-bbda-42c0-82fe-8b187ad91607-kube-api-access-w4mp4\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.360901 master-0 kubenswrapper[7721]: I0216 02:10:40.358498 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgqtf\" (UniqueName: \"kubernetes.io/projected/86af980a-2653-40c3-a368-a795d7fb8558-kube-api-access-tgqtf\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.360901 master-0 kubenswrapper[7721]: I0216 02:10:40.359068 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sctj8\" (UniqueName: \"kubernetes.io/projected/0a900f93-91c9-4782-89a3-1cc09f3aec95-kube-api-access-sctj8\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.478553 master-0 kubenswrapper[7721]: I0216 02:10:40.478446 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:10:40.511512 master-0 kubenswrapper[7721]: I0216 02:10:40.511428 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:10:40.848807 master-0 kubenswrapper[7721]: I0216 02:10:40.847238 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.854013 master-0 kubenswrapper[7721]: I0216 02:10:40.850071 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:40.970791 master-0 kubenswrapper[7721]: I0216 02:10:40.970715 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:10:40.974697 master-0 kubenswrapper[7721]: I0216 02:10:40.974659 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:10:40.974697 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:10:40.974697 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:10:40.974697 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:10:40.974844 master-0 kubenswrapper[7721]: I0216 02:10:40.974711 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:10:41.017491 master-0 kubenswrapper[7721]: I0216 02:10:41.017442 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"]
Feb 16 02:10:41.032451 master-0 kubenswrapper[7721]: E0216 02:10:41.028365 7721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 16 02:10:41.032451 master-0 kubenswrapper[7721]: E0216 02:10:41.031570 7721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 16 02:10:41.033788 master-0 kubenswrapper[7721]: E0216 02:10:41.033725 7721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 16 02:10:41.033873 master-0 kubenswrapper[7721]: E0216 02:10:41.033816 7721 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" podUID="009e7f72-fcdc-4a88-9769-09f95bccee6e" containerName="kube-multus-additional-cni-plugins"
Feb 16 02:10:41.131622 master-0 kubenswrapper[7721]: I0216 02:10:41.131543 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" event={"ID":"32d420d6-bbda-42c0-82fe-8b187ad91607","Type":"ContainerStarted","Data":"f218aafff65afcf35d3001ac97851bc4eb0edf9e76199787dd7b9355dbf3fd1e"}
Feb 16 02:10:41.133867 master-0 kubenswrapper[7721]: I0216 02:10:41.133813 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jxbq6" event={"ID":"0a900f93-91c9-4782-89a3-1cc09f3aec95","Type":"ContainerStarted","Data":"7d9105e1418ead3e83bafdc82309e78dfce8ddc065bc7a3854cc209af8115774"}
Feb 16 02:10:41.133947 master-0 kubenswrapper[7721]: I0216 02:10:41.133856 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:10:41.297617 master-0 kubenswrapper[7721]: I0216 02:10:41.297374 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Feb 16 02:10:41.298186 master-0 kubenswrapper[7721]: I0216 02:10:41.298152 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.300237 master-0 kubenswrapper[7721]: I0216 02:10:41.299597 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-9tdhq" Feb 16 02:10:41.300237 master-0 kubenswrapper[7721]: I0216 02:10:41.300052 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 02:10:41.311371 master-0 kubenswrapper[7721]: I0216 02:10:41.311198 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 16 02:10:41.484821 master-0 kubenswrapper[7721]: I0216 02:10:41.484734 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-var-lock\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.484821 master-0 kubenswrapper[7721]: I0216 02:10:41.484823 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.485178 master-0 kubenswrapper[7721]: I0216 02:10:41.484933 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b79cd7f-675e-4778-be06-95e79b1c008a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.566520 master-0 
kubenswrapper[7721]: I0216 02:10:41.566465 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"] Feb 16 02:10:41.586660 master-0 kubenswrapper[7721]: I0216 02:10:41.586611 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b79cd7f-675e-4778-be06-95e79b1c008a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.586833 master-0 kubenswrapper[7721]: I0216 02:10:41.586683 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-var-lock\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.586833 master-0 kubenswrapper[7721]: I0216 02:10:41.586706 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.586833 master-0 kubenswrapper[7721]: I0216 02:10:41.586785 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.587125 master-0 kubenswrapper[7721]: I0216 02:10:41.586983 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-var-lock\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.613500 master-0 kubenswrapper[7721]: I0216 02:10:41.613412 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b79cd7f-675e-4778-be06-95e79b1c008a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.635341 master-0 kubenswrapper[7721]: I0216 02:10:41.635277 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:10:41.974316 master-0 kubenswrapper[7721]: I0216 02:10:41.974249 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:41.974316 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:41.974316 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:41.974316 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:41.974554 master-0 kubenswrapper[7721]: I0216 02:10:41.974341 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:42.141645 master-0 kubenswrapper[7721]: I0216 02:10:42.141593 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" 
event={"ID":"32d420d6-bbda-42c0-82fe-8b187ad91607","Type":"ContainerStarted","Data":"b6ad419a3d57e0304e302f3a3a33339c2a3c18f928d5d48d437f5e8d619ecdbc"} Feb 16 02:10:42.141645 master-0 kubenswrapper[7721]: I0216 02:10:42.141647 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" event={"ID":"32d420d6-bbda-42c0-82fe-8b187ad91607","Type":"ContainerStarted","Data":"e9c2347794bf87b872581acd9e07616e75ca39fd40ce7ae89a0dec339e5133b1"} Feb 16 02:10:42.143048 master-0 kubenswrapper[7721]: I0216 02:10:42.143013 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jxbq6" event={"ID":"0a900f93-91c9-4782-89a3-1cc09f3aec95","Type":"ContainerStarted","Data":"c8ecc68c95de851226f7803b9643bfa846686a0947e8d9573c40fc13da9cd25c"} Feb 16 02:10:42.145914 master-0 kubenswrapper[7721]: I0216 02:10:42.145879 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" event={"ID":"86af980a-2653-40c3-a368-a795d7fb8558","Type":"ContainerStarted","Data":"f2d5026b3d62b6eac44704f83447125870ee696cf63066e123a37273291b1d8f"} Feb 16 02:10:42.271950 master-0 kubenswrapper[7721]: I0216 02:10:42.271623 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 16 02:10:42.968719 master-0 kubenswrapper[7721]: I0216 02:10:42.968653 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:42.968719 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:42.968719 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:42.968719 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:42.969356 master-0 kubenswrapper[7721]: I0216 
02:10:42.968736 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:43.155069 master-0 kubenswrapper[7721]: I0216 02:10:43.155003 7721 generic.go:334] "Generic (PLEG): container finished" podID="0a900f93-91c9-4782-89a3-1cc09f3aec95" containerID="c8ecc68c95de851226f7803b9643bfa846686a0947e8d9573c40fc13da9cd25c" exitCode=0 Feb 16 02:10:43.155302 master-0 kubenswrapper[7721]: I0216 02:10:43.155118 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jxbq6" event={"ID":"0a900f93-91c9-4782-89a3-1cc09f3aec95","Type":"ContainerDied","Data":"c8ecc68c95de851226f7803b9643bfa846686a0947e8d9573c40fc13da9cd25c"} Feb 16 02:10:43.157862 master-0 kubenswrapper[7721]: I0216 02:10:43.156609 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"5b79cd7f-675e-4778-be06-95e79b1c008a","Type":"ContainerStarted","Data":"7ece968240d91c23ca40de1bcd222697872432fda5d86a538de746813dee22af"} Feb 16 02:10:43.157862 master-0 kubenswrapper[7721]: I0216 02:10:43.156636 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"5b79cd7f-675e-4778-be06-95e79b1c008a","Type":"ContainerStarted","Data":"f617d4c81a045b3a0a2096d5b392bf8b99ea0de59561036edc52c149d97f12ac"} Feb 16 02:10:43.162968 master-0 kubenswrapper[7721]: I0216 02:10:43.162665 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" event={"ID":"32d420d6-bbda-42c0-82fe-8b187ad91607","Type":"ContainerStarted","Data":"a9090b1957479e8109508a8557ff60b3c52f30920ce4fd1a0ffb93d7098e578c"} Feb 16 02:10:43.223780 master-0 kubenswrapper[7721]: I0216 02:10:43.223654 7721 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.223624715 podStartE2EDuration="2.223624715s" podCreationTimestamp="2026-02-16 02:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:10:43.217851533 +0000 UTC m=+246.712085835" watchObservedRunningTime="2026-02-16 02:10:43.223624715 +0000 UTC m=+246.717859017" Feb 16 02:10:43.247612 master-0 kubenswrapper[7721]: I0216 02:10:43.247502 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" podStartSLOduration=1.96636451 podStartE2EDuration="3.247471639s" podCreationTimestamp="2026-02-16 02:10:40 +0000 UTC" firstStartedPulling="2026-02-16 02:10:41.295304051 +0000 UTC m=+244.789538323" lastFinishedPulling="2026-02-16 02:10:42.57641115 +0000 UTC m=+246.070645452" observedRunningTime="2026-02-16 02:10:43.246341219 +0000 UTC m=+246.740575511" watchObservedRunningTime="2026-02-16 02:10:43.247471639 +0000 UTC m=+246.741705931" Feb 16 02:10:43.969712 master-0 kubenswrapper[7721]: I0216 02:10:43.969612 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:43.969712 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:43.969712 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:43.969712 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:43.969712 master-0 kubenswrapper[7721]: I0216 02:10:43.969709 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:44.174546 master-0 kubenswrapper[7721]: I0216 02:10:44.174470 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jxbq6" event={"ID":"0a900f93-91c9-4782-89a3-1cc09f3aec95","Type":"ContainerStarted","Data":"1e86264e8404f7ff58cad7527b56f7a44869ccab5a42f18bb67dd18b3fc9138e"} Feb 16 02:10:44.174546 master-0 kubenswrapper[7721]: I0216 02:10:44.174549 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-jxbq6" event={"ID":"0a900f93-91c9-4782-89a3-1cc09f3aec95","Type":"ContainerStarted","Data":"3ffeee940e09b68982a1807585f77f8b0623fcd9ee35017ef32393dbf39f7e6a"} Feb 16 02:10:44.177957 master-0 kubenswrapper[7721]: I0216 02:10:44.177893 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" event={"ID":"86af980a-2653-40c3-a368-a795d7fb8558","Type":"ContainerStarted","Data":"9dd0c97cff5ed81ca8a7198c60f7fb874bb426fd2a2f7cc938e80d7e550ad9fb"} Feb 16 02:10:44.177957 master-0 kubenswrapper[7721]: I0216 02:10:44.177944 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" event={"ID":"86af980a-2653-40c3-a368-a795d7fb8558","Type":"ContainerStarted","Data":"094b7c33509932c7ad5ab4af6d99403bfe778ebf9b2738fbf698de66ea74eb28"} Feb 16 02:10:44.177957 master-0 kubenswrapper[7721]: I0216 02:10:44.177967 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" event={"ID":"86af980a-2653-40c3-a368-a795d7fb8558","Type":"ContainerStarted","Data":"35d42ca1da3cdc042cc669dcd4cadb61ea705241e759a7ecdf2dfe667b965ad1"} Feb 16 02:10:44.208507 master-0 kubenswrapper[7721]: I0216 02:10:44.208369 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-jxbq6" podStartSLOduration=2.957422765 
podStartE2EDuration="4.208343565s" podCreationTimestamp="2026-02-16 02:10:40 +0000 UTC" firstStartedPulling="2026-02-16 02:10:40.550779869 +0000 UTC m=+244.045014141" lastFinishedPulling="2026-02-16 02:10:41.801700669 +0000 UTC m=+245.295934941" observedRunningTime="2026-02-16 02:10:44.203211391 +0000 UTC m=+247.697445693" watchObservedRunningTime="2026-02-16 02:10:44.208343565 +0000 UTC m=+247.702577857" Feb 16 02:10:44.245367 master-0 kubenswrapper[7721]: I0216 02:10:44.245183 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" podStartSLOduration=2.875086561 podStartE2EDuration="4.245162849s" podCreationTimestamp="2026-02-16 02:10:40 +0000 UTC" firstStartedPulling="2026-02-16 02:10:41.7532356 +0000 UTC m=+245.247469862" lastFinishedPulling="2026-02-16 02:10:43.123311878 +0000 UTC m=+246.617546150" observedRunningTime="2026-02-16 02:10:44.23908275 +0000 UTC m=+247.733317072" watchObservedRunningTime="2026-02-16 02:10:44.245162849 +0000 UTC m=+247.739397141" Feb 16 02:10:44.968386 master-0 kubenswrapper[7721]: I0216 02:10:44.968317 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:44.968386 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:44.968386 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:44.968386 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:44.968704 master-0 kubenswrapper[7721]: I0216 02:10:44.968403 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:45.966317 master-0 
kubenswrapper[7721]: I0216 02:10:45.966238 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-67b79bd656-cs2n2"] Feb 16 02:10:45.968394 master-0 kubenswrapper[7721]: I0216 02:10:45.968313 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:45.971608 master-0 kubenswrapper[7721]: I0216 02:10:45.971351 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 02:10:45.971608 master-0 kubenswrapper[7721]: I0216 02:10:45.971392 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:45.971608 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:45.971608 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:45.971608 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:45.971608 master-0 kubenswrapper[7721]: I0216 02:10:45.971496 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:45.972777 master-0 kubenswrapper[7721]: I0216 02:10:45.972553 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2md94v7udfjth" Feb 16 02:10:45.972777 master-0 kubenswrapper[7721]: I0216 02:10:45.972582 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 02:10:45.972777 master-0 kubenswrapper[7721]: I0216 02:10:45.972755 7721 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"metrics-server-dockercfg-wsv7k" Feb 16 02:10:45.977110 master-0 kubenswrapper[7721]: I0216 02:10:45.977046 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 02:10:45.982076 master-0 kubenswrapper[7721]: I0216 02:10:45.982016 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 02:10:45.990486 master-0 kubenswrapper[7721]: I0216 02:10:45.990424 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-67b79bd656-cs2n2"] Feb 16 02:10:46.069925 master-0 kubenswrapper[7721]: I0216 02:10:46.069839 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9snq8\" (UniqueName: \"kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.070258 master-0 kubenswrapper[7721]: I0216 02:10:46.070036 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.070258 master-0 kubenswrapper[7721]: I0216 02:10:46.070118 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" 
Feb 16 02:10:46.070258 master-0 kubenswrapper[7721]: I0216 02:10:46.070154 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.070258 master-0 kubenswrapper[7721]: I0216 02:10:46.070214 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.070602 master-0 kubenswrapper[7721]: I0216 02:10:46.070304 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.070602 master-0 kubenswrapper[7721]: I0216 02:10:46.070362 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.171997 master-0 kubenswrapper[7721]: I0216 02:10:46.171024 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.171997 master-0 kubenswrapper[7721]: I0216 02:10:46.171274 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.171997 master-0 kubenswrapper[7721]: I0216 02:10:46.171379 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.171997 master-0 kubenswrapper[7721]: I0216 02:10:46.171661 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9snq8\" (UniqueName: \"kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.171997 master-0 kubenswrapper[7721]: I0216 02:10:46.171794 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " 
pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.171997 master-0 kubenswrapper[7721]: I0216 02:10:46.171864 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.171997 master-0 kubenswrapper[7721]: I0216 02:10:46.171903 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.176512 master-0 kubenswrapper[7721]: I0216 02:10:46.173766 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.176512 master-0 kubenswrapper[7721]: I0216 02:10:46.174678 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.176512 master-0 kubenswrapper[7721]: I0216 02:10:46.174966 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.178926 master-0 kubenswrapper[7721]: I0216 02:10:46.178874 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.180361 master-0 kubenswrapper[7721]: I0216 02:10:46.180295 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.188915 master-0 kubenswrapper[7721]: I0216 02:10:46.188772 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.212064 master-0 kubenswrapper[7721]: I0216 02:10:46.212009 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9snq8\" (UniqueName: \"kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.297354 master-0 kubenswrapper[7721]: 
I0216 02:10:46.297289 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:10:46.855227 master-0 kubenswrapper[7721]: I0216 02:10:46.855071 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-67b79bd656-cs2n2"] Feb 16 02:10:46.970469 master-0 kubenswrapper[7721]: I0216 02:10:46.969912 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:46.970469 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:46.970469 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:46.970469 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:46.970469 master-0 kubenswrapper[7721]: I0216 02:10:46.970071 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:47.204449 master-0 kubenswrapper[7721]: I0216 02:10:47.204379 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" event={"ID":"8c267cc7-a51a-4b14-baee-e584254eefc5","Type":"ContainerStarted","Data":"4f55a0409391e0031662fe90965f9c6570290d87940cb9577014c63ddf57bd34"} Feb 16 02:10:47.968054 master-0 kubenswrapper[7721]: I0216 02:10:47.967950 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:47.968054 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 
16 02:10:47.968054 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:47.968054 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:47.968508 master-0 kubenswrapper[7721]: I0216 02:10:47.968087 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:48.970671 master-0 kubenswrapper[7721]: I0216 02:10:48.969899 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:48.970671 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:48.970671 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:48.970671 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:48.971833 master-0 kubenswrapper[7721]: I0216 02:10:48.970670 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:49.227178 master-0 kubenswrapper[7721]: I0216 02:10:49.227005 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" event={"ID":"8c267cc7-a51a-4b14-baee-e584254eefc5","Type":"ContainerStarted","Data":"f80750b41fcca97bf8458c1b6044d45377e09a5a0f5619c086ed38bf7a1478e0"} Feb 16 02:10:49.968657 master-0 kubenswrapper[7721]: I0216 02:10:49.968576 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:49.968657 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:49.968657 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:49.968657 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:49.969103 master-0 kubenswrapper[7721]: I0216 02:10:49.968687 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:50.968589 master-0 kubenswrapper[7721]: I0216 02:10:50.968510 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:50.968589 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:50.968589 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:50.968589 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:50.969580 master-0 kubenswrapper[7721]: I0216 02:10:50.968599 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:51.027796 master-0 kubenswrapper[7721]: E0216 02:10:51.027709 7721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:10:51.030859 master-0 kubenswrapper[7721]: E0216 02:10:51.030794 7721 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:10:51.036885 master-0 kubenswrapper[7721]: E0216 02:10:51.036796 7721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:10:51.036960 master-0 kubenswrapper[7721]: E0216 02:10:51.036885 7721 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" podUID="009e7f72-fcdc-4a88-9769-09f95bccee6e" containerName="kube-multus-additional-cni-plugins" Feb 16 02:10:51.969478 master-0 kubenswrapper[7721]: I0216 02:10:51.969393 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:51.969478 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:51.969478 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:51.969478 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:51.970251 master-0 kubenswrapper[7721]: I0216 02:10:51.969522 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 16 02:10:52.969660 master-0 kubenswrapper[7721]: I0216 02:10:52.969563 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:52.969660 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:52.969660 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:52.969660 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:52.970642 master-0 kubenswrapper[7721]: I0216 02:10:52.969677 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:53.969376 master-0 kubenswrapper[7721]: I0216 02:10:53.969299 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:53.969376 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:53.969376 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:53.969376 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:53.970064 master-0 kubenswrapper[7721]: I0216 02:10:53.969397 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:54.969412 master-0 kubenswrapper[7721]: I0216 02:10:54.969320 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:54.969412 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:54.969412 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:54.969412 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:54.970343 master-0 kubenswrapper[7721]: I0216 02:10:54.969428 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:55.968973 master-0 kubenswrapper[7721]: I0216 02:10:55.968887 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:55.968973 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:55.968973 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:55.968973 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:55.969479 master-0 kubenswrapper[7721]: I0216 02:10:55.968994 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:56.969252 master-0 kubenswrapper[7721]: I0216 02:10:56.969171 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:56.969252 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 
16 02:10:56.969252 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:56.969252 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:56.969252 master-0 kubenswrapper[7721]: I0216 02:10:56.969256 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:57.313654 master-0 kubenswrapper[7721]: I0216 02:10:57.313513 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-62wr2_b6088119-1125-4271-8c0b-0675e700edd9/multus-admission-controller/0.log" Feb 16 02:10:57.313858 master-0 kubenswrapper[7721]: I0216 02:10:57.313675 7721 generic.go:334] "Generic (PLEG): container finished" podID="b6088119-1125-4271-8c0b-0675e700edd9" containerID="1ad7005008299c6a17798fe23c749281008c22dc9cf9892a566f5f5d5a934a24" exitCode=137 Feb 16 02:10:57.313858 master-0 kubenswrapper[7721]: I0216 02:10:57.313721 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" event={"ID":"b6088119-1125-4271-8c0b-0675e700edd9","Type":"ContainerDied","Data":"1ad7005008299c6a17798fe23c749281008c22dc9cf9892a566f5f5d5a934a24"} Feb 16 02:10:57.504577 master-0 kubenswrapper[7721]: I0216 02:10:57.504496 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-62wr2_b6088119-1125-4271-8c0b-0675e700edd9/multus-admission-controller/0.log" Feb 16 02:10:57.504577 master-0 kubenswrapper[7721]: I0216 02:10:57.504613 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:10:57.533921 master-0 kubenswrapper[7721]: I0216 02:10:57.531871 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" podStartSLOduration=10.847227723 podStartE2EDuration="12.531847647s" podCreationTimestamp="2026-02-16 02:10:45 +0000 UTC" firstStartedPulling="2026-02-16 02:10:46.86782813 +0000 UTC m=+250.362062392" lastFinishedPulling="2026-02-16 02:10:48.552448054 +0000 UTC m=+252.046682316" observedRunningTime="2026-02-16 02:10:49.261104357 +0000 UTC m=+252.755338699" watchObservedRunningTime="2026-02-16 02:10:57.531847647 +0000 UTC m=+261.026081909" Feb 16 02:10:57.589550 master-0 kubenswrapper[7721]: I0216 02:10:57.589344 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") pod \"b6088119-1125-4271-8c0b-0675e700edd9\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " Feb 16 02:10:57.589823 master-0 kubenswrapper[7721]: I0216 02:10:57.589658 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgpcj\" (UniqueName: \"kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj\") pod \"b6088119-1125-4271-8c0b-0675e700edd9\" (UID: \"b6088119-1125-4271-8c0b-0675e700edd9\") " Feb 16 02:10:57.593586 master-0 kubenswrapper[7721]: I0216 02:10:57.593452 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj" (OuterVolumeSpecName: "kube-api-access-jgpcj") pod "b6088119-1125-4271-8c0b-0675e700edd9" (UID: "b6088119-1125-4271-8c0b-0675e700edd9"). InnerVolumeSpecName "kube-api-access-jgpcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:10:57.594983 master-0 kubenswrapper[7721]: I0216 02:10:57.594909 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "b6088119-1125-4271-8c0b-0675e700edd9" (UID: "b6088119-1125-4271-8c0b-0675e700edd9"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:10:57.692718 master-0 kubenswrapper[7721]: I0216 02:10:57.692655 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgpcj\" (UniqueName: \"kubernetes.io/projected/b6088119-1125-4271-8c0b-0675e700edd9-kube-api-access-jgpcj\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:57.692942 master-0 kubenswrapper[7721]: I0216 02:10:57.692733 7721 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b6088119-1125-4271-8c0b-0675e700edd9-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:10:57.970179 master-0 kubenswrapper[7721]: I0216 02:10:57.970076 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:57.970179 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:57.970179 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:57.970179 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:57.971185 master-0 kubenswrapper[7721]: I0216 02:10:57.970188 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:58.337808 
master-0 kubenswrapper[7721]: I0216 02:10:58.337643 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-62wr2_b6088119-1125-4271-8c0b-0675e700edd9/multus-admission-controller/0.log" Feb 16 02:10:58.337808 master-0 kubenswrapper[7721]: I0216 02:10:58.337756 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" event={"ID":"b6088119-1125-4271-8c0b-0675e700edd9","Type":"ContainerDied","Data":"2acb9f3d5afb22161d4bddb28de7cc78d9278231cc5c43495ca4ac920d494a5e"} Feb 16 02:10:58.338125 master-0 kubenswrapper[7721]: I0216 02:10:58.337825 7721 scope.go:117] "RemoveContainer" containerID="0a7334b26fd5842515d0403030d7f8a503f042b5470ce6d8a2e80440c021f184" Feb 16 02:10:58.338125 master-0 kubenswrapper[7721]: I0216 02:10:58.337913 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-62wr2" Feb 16 02:10:58.373997 master-0 kubenswrapper[7721]: I0216 02:10:58.373923 7721 scope.go:117] "RemoveContainer" containerID="1ad7005008299c6a17798fe23c749281008c22dc9cf9892a566f5f5d5a934a24" Feb 16 02:10:58.410214 master-0 kubenswrapper[7721]: I0216 02:10:58.408642 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-62wr2"] Feb 16 02:10:58.424296 master-0 kubenswrapper[7721]: I0216 02:10:58.424199 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-62wr2"] Feb 16 02:10:58.742029 master-0 kubenswrapper[7721]: I0216 02:10:58.741943 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6088119-1125-4271-8c0b-0675e700edd9" path="/var/lib/kubelet/pods/b6088119-1125-4271-8c0b-0675e700edd9/volumes" Feb 16 02:10:58.968607 master-0 kubenswrapper[7721]: I0216 02:10:58.968519 7721 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:58.968607 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:58.968607 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:58.968607 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:58.968910 master-0 kubenswrapper[7721]: I0216 02:10:58.968635 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:10:59.967654 master-0 kubenswrapper[7721]: I0216 02:10:59.967591 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:10:59.967654 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:10:59.967654 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:10:59.967654 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:10:59.968500 master-0 kubenswrapper[7721]: I0216 02:10:59.967685 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:00.969702 master-0 kubenswrapper[7721]: I0216 02:11:00.969609 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
02:11:00.969702 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:00.969702 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:00.969702 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:00.970678 master-0 kubenswrapper[7721]: I0216 02:11:00.969720 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:01.028459 master-0 kubenswrapper[7721]: E0216 02:11:01.028340 7721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:11:01.030507 master-0 kubenswrapper[7721]: E0216 02:11:01.030390 7721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:11:01.032861 master-0 kubenswrapper[7721]: E0216 02:11:01.032797 7721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:11:01.032861 master-0 kubenswrapper[7721]: E0216 02:11:01.032853 7721 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" podUID="009e7f72-fcdc-4a88-9769-09f95bccee6e" containerName="kube-multus-additional-cni-plugins" Feb 16 02:11:01.969939 master-0 kubenswrapper[7721]: I0216 02:11:01.969845 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:01.969939 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:01.969939 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:01.969939 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:01.971156 master-0 kubenswrapper[7721]: I0216 02:11:01.969979 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:02.968753 master-0 kubenswrapper[7721]: I0216 02:11:02.968664 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:02.968753 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:02.968753 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:02.968753 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:02.969268 master-0 kubenswrapper[7721]: I0216 02:11:02.968764 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:03.969119 master-0 
kubenswrapper[7721]: I0216 02:11:03.969032 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:03.969119 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:03.969119 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:03.969119 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:03.970075 master-0 kubenswrapper[7721]: I0216 02:11:03.969151 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:04.968300 master-0 kubenswrapper[7721]: I0216 02:11:04.968230 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:04.968300 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:04.968300 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:04.968300 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:04.968737 master-0 kubenswrapper[7721]: I0216 02:11:04.968316 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:05.230206 master-0 kubenswrapper[7721]: I0216 02:11:05.230083 7721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7c8qv_009e7f72-fcdc-4a88-9769-09f95bccee6e/kube-multus-additional-cni-plugins/0.log" Feb 16 02:11:05.230206 master-0 kubenswrapper[7721]: I0216 02:11:05.230172 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" Feb 16 02:11:05.245204 master-0 kubenswrapper[7721]: E0216 02:11:05.245129 7721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6088119_1125_4271_8c0b_0675e700edd9.slice/crio-2acb9f3d5afb22161d4bddb28de7cc78d9278231cc5c43495ca4ac920d494a5e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009e7f72_fcdc_4a88_9769_09f95bccee6e.slice/crio-2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009e7f72_fcdc_4a88_9769_09f95bccee6e.slice/crio-conmon-2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44.scope\": RecentStats: unable to find data in memory cache]" Feb 16 02:11:05.245447 master-0 kubenswrapper[7721]: E0216 02:11:05.245331 7721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6088119_1125_4271_8c0b_0675e700edd9.slice/crio-1ad7005008299c6a17798fe23c749281008c22dc9cf9892a566f5f5d5a934a24.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009e7f72_fcdc_4a88_9769_09f95bccee6e.slice/crio-conmon-2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44.scope\": RecentStats: unable to find data in memory cache]" Feb 16 02:11:05.249615 master-0 kubenswrapper[7721]: E0216 
02:11:05.249523 7721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6088119_1125_4271_8c0b_0675e700edd9.slice/crio-2acb9f3d5afb22161d4bddb28de7cc78d9278231cc5c43495ca4ac920d494a5e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009e7f72_fcdc_4a88_9769_09f95bccee6e.slice/crio-conmon-2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6088119_1125_4271_8c0b_0675e700edd9.slice/crio-conmon-1ad7005008299c6a17798fe23c749281008c22dc9cf9892a566f5f5d5a934a24.scope\": RecentStats: unable to find data in memory cache]" Feb 16 02:11:05.324518 master-0 kubenswrapper[7721]: I0216 02:11:05.324413 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/009e7f72-fcdc-4a88-9769-09f95bccee6e-tuning-conf-dir\") pod \"009e7f72-fcdc-4a88-9769-09f95bccee6e\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " Feb 16 02:11:05.324898 master-0 kubenswrapper[7721]: I0216 02:11:05.324536 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/009e7f72-fcdc-4a88-9769-09f95bccee6e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "009e7f72-fcdc-4a88-9769-09f95bccee6e" (UID: "009e7f72-fcdc-4a88-9769-09f95bccee6e"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:11:05.324898 master-0 kubenswrapper[7721]: I0216 02:11:05.324596 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdsdl\" (UniqueName: \"kubernetes.io/projected/009e7f72-fcdc-4a88-9769-09f95bccee6e-kube-api-access-kdsdl\") pod \"009e7f72-fcdc-4a88-9769-09f95bccee6e\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " Feb 16 02:11:05.324898 master-0 kubenswrapper[7721]: I0216 02:11:05.324660 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/009e7f72-fcdc-4a88-9769-09f95bccee6e-ready\") pod \"009e7f72-fcdc-4a88-9769-09f95bccee6e\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " Feb 16 02:11:05.324898 master-0 kubenswrapper[7721]: I0216 02:11:05.324704 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/009e7f72-fcdc-4a88-9769-09f95bccee6e-cni-sysctl-allowlist\") pod \"009e7f72-fcdc-4a88-9769-09f95bccee6e\" (UID: \"009e7f72-fcdc-4a88-9769-09f95bccee6e\") " Feb 16 02:11:05.325103 master-0 kubenswrapper[7721]: I0216 02:11:05.325052 7721 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/009e7f72-fcdc-4a88-9769-09f95bccee6e-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:11:05.325237 master-0 kubenswrapper[7721]: I0216 02:11:05.325168 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/009e7f72-fcdc-4a88-9769-09f95bccee6e-ready" (OuterVolumeSpecName: "ready") pod "009e7f72-fcdc-4a88-9769-09f95bccee6e" (UID: "009e7f72-fcdc-4a88-9769-09f95bccee6e"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:11:05.325990 master-0 kubenswrapper[7721]: I0216 02:11:05.325895 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009e7f72-fcdc-4a88-9769-09f95bccee6e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "009e7f72-fcdc-4a88-9769-09f95bccee6e" (UID: "009e7f72-fcdc-4a88-9769-09f95bccee6e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:11:05.329649 master-0 kubenswrapper[7721]: I0216 02:11:05.329564 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009e7f72-fcdc-4a88-9769-09f95bccee6e-kube-api-access-kdsdl" (OuterVolumeSpecName: "kube-api-access-kdsdl") pod "009e7f72-fcdc-4a88-9769-09f95bccee6e" (UID: "009e7f72-fcdc-4a88-9769-09f95bccee6e"). InnerVolumeSpecName "kube-api-access-kdsdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:11:05.403297 master-0 kubenswrapper[7721]: I0216 02:11:05.403229 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7c8qv_009e7f72-fcdc-4a88-9769-09f95bccee6e/kube-multus-additional-cni-plugins/0.log"
Feb 16 02:11:05.404838 master-0 kubenswrapper[7721]: I0216 02:11:05.404762 7721 generic.go:334] "Generic (PLEG): container finished" podID="009e7f72-fcdc-4a88-9769-09f95bccee6e" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44" exitCode=137
Feb 16 02:11:05.404921 master-0 kubenswrapper[7721]: I0216 02:11:05.404852 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" event={"ID":"009e7f72-fcdc-4a88-9769-09f95bccee6e","Type":"ContainerDied","Data":"2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44"}
Feb 16 02:11:05.404973 master-0 kubenswrapper[7721]: I0216 02:11:05.404925 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv" event={"ID":"009e7f72-fcdc-4a88-9769-09f95bccee6e","Type":"ContainerDied","Data":"9a477bb9fb47b237818cd392a5e13b01d794b117ac5f6e24e794b17acdd31ff5"}
Feb 16 02:11:05.405020 master-0 kubenswrapper[7721]: I0216 02:11:05.404990 7721 scope.go:117] "RemoveContainer" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44"
Feb 16 02:11:05.405143 master-0 kubenswrapper[7721]: I0216 02:11:05.405071 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7c8qv"
Feb 16 02:11:05.426820 master-0 kubenswrapper[7721]: I0216 02:11:05.426762 7721 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/009e7f72-fcdc-4a88-9769-09f95bccee6e-ready\") on node \"master-0\" DevicePath \"\""
Feb 16 02:11:05.426820 master-0 kubenswrapper[7721]: I0216 02:11:05.426804 7721 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/009e7f72-fcdc-4a88-9769-09f95bccee6e-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Feb 16 02:11:05.426820 master-0 kubenswrapper[7721]: I0216 02:11:05.426821 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdsdl\" (UniqueName: \"kubernetes.io/projected/009e7f72-fcdc-4a88-9769-09f95bccee6e-kube-api-access-kdsdl\") on node \"master-0\" DevicePath \"\""
Feb 16 02:11:05.429995 master-0 kubenswrapper[7721]: I0216 02:11:05.429952 7721 scope.go:117] "RemoveContainer" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44"
Feb 16 02:11:05.430649 master-0 kubenswrapper[7721]: E0216 02:11:05.430579 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44\": container with ID starting with 2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44 not found: ID does not exist" containerID="2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44"
Feb 16 02:11:05.430740 master-0 kubenswrapper[7721]: I0216 02:11:05.430655 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44"} err="failed to get container status \"2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44\": rpc error: code = NotFound desc = could not find container \"2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44\": container with ID starting with 2efdaf5ed192249635cc07babe812fdcdd13a7e2f2a52f81ea63d81c53853c44 not found: ID does not exist"
Feb 16 02:11:05.460143 master-0 kubenswrapper[7721]: I0216 02:11:05.460044 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7c8qv"]
Feb 16 02:11:05.465490 master-0 kubenswrapper[7721]: I0216 02:11:05.465397 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7c8qv"]
Feb 16 02:11:05.969688 master-0 kubenswrapper[7721]: I0216 02:11:05.969600 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:05.969688 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:05.969688 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:05.969688 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:05.970363 master-0 kubenswrapper[7721]: I0216 02:11:05.969706 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:06.298171 master-0 kubenswrapper[7721]: I0216 02:11:06.297918 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:11:06.298171 master-0 kubenswrapper[7721]: I0216 02:11:06.298060 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:11:06.741989 master-0 kubenswrapper[7721]: I0216 02:11:06.741841 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009e7f72-fcdc-4a88-9769-09f95bccee6e" path="/var/lib/kubelet/pods/009e7f72-fcdc-4a88-9769-09f95bccee6e/volumes"
Feb 16 02:11:06.969216 master-0 kubenswrapper[7721]: I0216 02:11:06.969078 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:06.969216 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:06.969216 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:06.969216 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:06.969699 master-0 kubenswrapper[7721]: I0216 02:11:06.969224 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:07.968401 master-0 kubenswrapper[7721]: I0216 02:11:07.968298 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:07.968401 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:07.968401 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:07.968401 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:07.969087 master-0 kubenswrapper[7721]: I0216 02:11:07.968421 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:08.968716 master-0 kubenswrapper[7721]: I0216 02:11:08.968620 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:08.968716 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:08.968716 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:08.968716 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:08.969705 master-0 kubenswrapper[7721]: I0216 02:11:08.968731 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:09.970626 master-0 kubenswrapper[7721]: I0216 02:11:09.970557 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:09.970626 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:09.970626 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:09.970626 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:09.971650 master-0 kubenswrapper[7721]: I0216 02:11:09.970650 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:10.969204 master-0 kubenswrapper[7721]: I0216 02:11:10.969129 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:10.969204 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:10.969204 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:10.969204 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:10.969672 master-0 kubenswrapper[7721]: I0216 02:11:10.969218 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:11.969382 master-0 kubenswrapper[7721]: I0216 02:11:11.969097 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:11.969382 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:11.969382 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:11.969382 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:11.970389 master-0 kubenswrapper[7721]: I0216 02:11:11.969387 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:12.968737 master-0 kubenswrapper[7721]: I0216 02:11:12.968621 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:12.968737 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:12.968737 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:12.968737 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:12.968737 master-0 kubenswrapper[7721]: I0216 02:11:12.968725 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:13.968924 master-0 kubenswrapper[7721]: I0216 02:11:13.968836 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:13.968924 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:13.968924 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:13.968924 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:13.970004 master-0 kubenswrapper[7721]: I0216 02:11:13.968929 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:14.973432 master-0 kubenswrapper[7721]: I0216 02:11:14.973340 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:14.973432 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:14.973432 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:14.973432 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:14.974668 master-0 kubenswrapper[7721]: I0216 02:11:14.973474 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:15.831095 master-0 kubenswrapper[7721]: I0216 02:11:15.831029 7721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 16 02:11:15.831951 master-0 kubenswrapper[7721]: I0216 02:11:15.831854 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager" containerID="cri-o://462bc8e54438708fbe0de05ecb433d15f63ff46542c44ae6f1cb6f59fc242a3b" gracePeriod=30
Feb 16 02:11:15.832605 master-0 kubenswrapper[7721]: I0216 02:11:15.831991 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://1f0cb68115478c6fd515542fbb0fa0d43b3b478c6e2bb7366eec3aa3beebf374" gracePeriod=30
Feb 16 02:11:15.832909 master-0 kubenswrapper[7721]: I0216 02:11:15.832066 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="cluster-policy-controller" containerID="cri-o://f7886612dab7fdbb2c8fa01ccf5ff672b9f28739bb24c915a3676c6391134016" gracePeriod=30
Feb 16 02:11:15.834043 master-0 kubenswrapper[7721]: I0216 02:11:15.832227 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://e06f2cd26b4721860828d787726c09450d829acd1f0cf5360dbf2c9f1becfde8" gracePeriod=30
Feb 16 02:11:15.834771 master-0 kubenswrapper[7721]: I0216 02:11:15.834699 7721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 16 02:11:15.835175 master-0 kubenswrapper[7721]: E0216 02:11:15.835113 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager-cert-syncer"
Feb 16 02:11:15.835175 master-0 kubenswrapper[7721]: I0216 02:11:15.835148 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager-cert-syncer"
Feb 16 02:11:15.835175 master-0 kubenswrapper[7721]: E0216 02:11:15.835173 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009e7f72-fcdc-4a88-9769-09f95bccee6e" containerName="kube-multus-additional-cni-plugins"
Feb 16 02:11:15.835175 master-0 kubenswrapper[7721]: I0216 02:11:15.835186 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="009e7f72-fcdc-4a88-9769-09f95bccee6e" containerName="kube-multus-additional-cni-plugins"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: E0216 02:11:15.835204 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6088119-1125-4271-8c0b-0675e700edd9" containerName="kube-rbac-proxy"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835216 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6088119-1125-4271-8c0b-0675e700edd9" containerName="kube-rbac-proxy"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: E0216 02:11:15.835235 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6088119-1125-4271-8c0b-0675e700edd9" containerName="multus-admission-controller"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835247 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6088119-1125-4271-8c0b-0675e700edd9" containerName="multus-admission-controller"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: E0216 02:11:15.835270 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager-recovery-controller"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835283 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager-recovery-controller"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: E0216 02:11:15.835303 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="cluster-policy-controller"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835315 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="cluster-policy-controller"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: E0216 02:11:15.835342 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835354 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835564 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="009e7f72-fcdc-4a88-9769-09f95bccee6e" containerName="kube-multus-additional-cni-plugins"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835649 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835668 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="cluster-policy-controller"
Feb 16 02:11:15.835686 master-0 kubenswrapper[7721]: I0216 02:11:15.835716 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager-cert-syncer"
Feb 16 02:11:15.837012 master-0 kubenswrapper[7721]: I0216 02:11:15.835741 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6088119-1125-4271-8c0b-0675e700edd9" containerName="multus-admission-controller"
Feb 16 02:11:15.837012 master-0 kubenswrapper[7721]: I0216 02:11:15.835759 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="971c7312e8ac72eb9932acb64a3dd785" containerName="kube-controller-manager-recovery-controller"
Feb 16 02:11:15.837012 master-0 kubenswrapper[7721]: I0216 02:11:15.835776 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6088119-1125-4271-8c0b-0675e700edd9" containerName="kube-rbac-proxy"
Feb 16 02:11:15.893014 master-0 kubenswrapper[7721]: I0216 02:11:15.892924 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:15.893210 master-0 kubenswrapper[7721]: I0216 02:11:15.893024 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:15.972727 master-0 kubenswrapper[7721]: I0216 02:11:15.972653 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:15.972727 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:15.972727 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:15.972727 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:15.973126 master-0 kubenswrapper[7721]: I0216 02:11:15.972749 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:15.995254 master-0 kubenswrapper[7721]: I0216 02:11:15.995195 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:15.996021 master-0 kubenswrapper[7721]: I0216 02:11:15.995988 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:15.996357 master-0 kubenswrapper[7721]: I0216 02:11:15.996066 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:15.996357 master-0 kubenswrapper[7721]: I0216 02:11:15.995347 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:16.087352 master-0 kubenswrapper[7721]: I0216 02:11:16.086965 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_971c7312e8ac72eb9932acb64a3dd785/kube-controller-manager-cert-syncer/0.log"
Feb 16 02:11:16.090218 master-0 kubenswrapper[7721]: I0216 02:11:16.089672 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:16.094951 master-0 kubenswrapper[7721]: I0216 02:11:16.094881 7721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="971c7312e8ac72eb9932acb64a3dd785" podUID="532487ad51c30257b744e7c1c79fb34f"
Feb 16 02:11:16.200581 master-0 kubenswrapper[7721]: I0216 02:11:16.200527 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-cert-dir\") pod \"971c7312e8ac72eb9932acb64a3dd785\" (UID: \"971c7312e8ac72eb9932acb64a3dd785\") "
Feb 16 02:11:16.200918 master-0 kubenswrapper[7721]: I0216 02:11:16.200889 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-resource-dir\") pod \"971c7312e8ac72eb9932acb64a3dd785\" (UID: \"971c7312e8ac72eb9932acb64a3dd785\") "
Feb 16 02:11:16.201235 master-0 kubenswrapper[7721]: I0216 02:11:16.200670 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "971c7312e8ac72eb9932acb64a3dd785" (UID: "971c7312e8ac72eb9932acb64a3dd785"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:11:16.201333 master-0 kubenswrapper[7721]: I0216 02:11:16.200977 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "971c7312e8ac72eb9932acb64a3dd785" (UID: "971c7312e8ac72eb9932acb64a3dd785"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:11:16.201798 master-0 kubenswrapper[7721]: I0216 02:11:16.201768 7721 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:11:16.201946 master-0 kubenswrapper[7721]: I0216 02:11:16.201923 7721 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/971c7312e8ac72eb9932acb64a3dd785-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:11:16.503631 master-0 kubenswrapper[7721]: I0216 02:11:16.503551 7721 generic.go:334] "Generic (PLEG): container finished" podID="5b79cd7f-675e-4778-be06-95e79b1c008a" containerID="7ece968240d91c23ca40de1bcd222697872432fda5d86a538de746813dee22af" exitCode=0
Feb 16 02:11:16.504393 master-0 kubenswrapper[7721]: I0216 02:11:16.503673 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"5b79cd7f-675e-4778-be06-95e79b1c008a","Type":"ContainerDied","Data":"7ece968240d91c23ca40de1bcd222697872432fda5d86a538de746813dee22af"}
Feb 16 02:11:16.508034 master-0 kubenswrapper[7721]: I0216 02:11:16.507965 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_971c7312e8ac72eb9932acb64a3dd785/kube-controller-manager-cert-syncer/0.log"
Feb 16 02:11:16.509796 master-0 kubenswrapper[7721]: I0216 02:11:16.509739 7721 generic.go:334] "Generic (PLEG): container finished" podID="971c7312e8ac72eb9932acb64a3dd785" containerID="1f0cb68115478c6fd515542fbb0fa0d43b3b478c6e2bb7366eec3aa3beebf374" exitCode=0
Feb 16 02:11:16.509796 master-0 kubenswrapper[7721]: I0216 02:11:16.509786 7721 generic.go:334] "Generic (PLEG): container finished" podID="971c7312e8ac72eb9932acb64a3dd785" containerID="e06f2cd26b4721860828d787726c09450d829acd1f0cf5360dbf2c9f1becfde8" exitCode=2
Feb 16 02:11:16.509971 master-0 kubenswrapper[7721]: I0216 02:11:16.509809 7721 generic.go:334] "Generic (PLEG): container finished" podID="971c7312e8ac72eb9932acb64a3dd785" containerID="f7886612dab7fdbb2c8fa01ccf5ff672b9f28739bb24c915a3676c6391134016" exitCode=0
Feb 16 02:11:16.509971 master-0 kubenswrapper[7721]: I0216 02:11:16.509825 7721 generic.go:334] "Generic (PLEG): container finished" podID="971c7312e8ac72eb9932acb64a3dd785" containerID="462bc8e54438708fbe0de05ecb433d15f63ff46542c44ae6f1cb6f59fc242a3b" exitCode=0
Feb 16 02:11:16.509971 master-0 kubenswrapper[7721]: I0216 02:11:16.509879 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3382fccbb3d34fbfaf90ce0b807cc46f9761dae67dfbad31837566d7894b9fdf"
Feb 16 02:11:16.509971 master-0 kubenswrapper[7721]: I0216 02:11:16.509898 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:16.537027 master-0 kubenswrapper[7721]: I0216 02:11:16.536688 7721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="971c7312e8ac72eb9932acb64a3dd785" podUID="532487ad51c30257b744e7c1c79fb34f"
Feb 16 02:11:16.553870 master-0 kubenswrapper[7721]: I0216 02:11:16.553814 7721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="971c7312e8ac72eb9932acb64a3dd785" podUID="532487ad51c30257b744e7c1c79fb34f"
Feb 16 02:11:16.740419 master-0 kubenswrapper[7721]: I0216 02:11:16.740364 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="971c7312e8ac72eb9932acb64a3dd785" path="/var/lib/kubelet/pods/971c7312e8ac72eb9932acb64a3dd785/volumes"
Feb 16 02:11:16.968248 master-0 kubenswrapper[7721]: I0216 02:11:16.968184 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:16.968248 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:16.968248 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:16.968248 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:16.968248 master-0 kubenswrapper[7721]: I0216 02:11:16.968276 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:17.903561 master-0 kubenswrapper[7721]: I0216 02:11:17.903427 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 02:11:17.944401 master-0 kubenswrapper[7721]: I0216 02:11:17.943965 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-kubelet-dir\") pod \"5b79cd7f-675e-4778-be06-95e79b1c008a\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") "
Feb 16 02:11:17.944401 master-0 kubenswrapper[7721]: I0216 02:11:17.944080 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b79cd7f-675e-4778-be06-95e79b1c008a-kube-api-access\") pod \"5b79cd7f-675e-4778-be06-95e79b1c008a\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") "
Feb 16 02:11:17.944401 master-0 kubenswrapper[7721]: I0216 02:11:17.944191 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-var-lock\") pod \"5b79cd7f-675e-4778-be06-95e79b1c008a\" (UID: \"5b79cd7f-675e-4778-be06-95e79b1c008a\") "
Feb 16 02:11:17.944401 master-0 kubenswrapper[7721]: I0216 02:11:17.944208 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5b79cd7f-675e-4778-be06-95e79b1c008a" (UID: "5b79cd7f-675e-4778-be06-95e79b1c008a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:11:17.945169 master-0 kubenswrapper[7721]: I0216 02:11:17.944522 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:11:17.945169 master-0 kubenswrapper[7721]: I0216 02:11:17.944594 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-var-lock" (OuterVolumeSpecName: "var-lock") pod "5b79cd7f-675e-4778-be06-95e79b1c008a" (UID: "5b79cd7f-675e-4778-be06-95e79b1c008a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:11:17.949608 master-0 kubenswrapper[7721]: I0216 02:11:17.948924 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b79cd7f-675e-4778-be06-95e79b1c008a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5b79cd7f-675e-4778-be06-95e79b1c008a" (UID: "5b79cd7f-675e-4778-be06-95e79b1c008a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:11:17.969154 master-0 kubenswrapper[7721]: I0216 02:11:17.969030 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:17.969154 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:17.969154 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:17.969154 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:17.969666 master-0 kubenswrapper[7721]: I0216 02:11:17.969174 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:18.046493 master-0 kubenswrapper[7721]: I0216 02:11:18.046204 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b79cd7f-675e-4778-be06-95e79b1c008a-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 02:11:18.046493 master-0 kubenswrapper[7721]: I0216 02:11:18.046333 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b79cd7f-675e-4778-be06-95e79b1c008a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 02:11:18.531326 master-0 kubenswrapper[7721]: I0216 02:11:18.531242 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"5b79cd7f-675e-4778-be06-95e79b1c008a","Type":"ContainerDied","Data":"f617d4c81a045b3a0a2096d5b392bf8b99ea0de59561036edc52c149d97f12ac"}
Feb 16 02:11:18.531326 master-0 kubenswrapper[7721]: I0216 02:11:18.531312 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f617d4c81a045b3a0a2096d5b392bf8b99ea0de59561036edc52c149d97f12ac"
Feb 16 02:11:18.531732 master-0 kubenswrapper[7721]: I0216 02:11:18.531343 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 02:11:18.969880 master-0 kubenswrapper[7721]: I0216 02:11:18.969784 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:18.969880 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:18.969880 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:18.969880 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:18.970825 master-0 kubenswrapper[7721]: I0216 02:11:18.969896 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:19.971811 master-0 kubenswrapper[7721]: I0216 02:11:19.971699 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:19.971811 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:19.971811 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:19.971811 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:19.972942 master-0 kubenswrapper[7721]: I0216 02:11:19.971822 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5"
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:20.969688 master-0 kubenswrapper[7721]: I0216 02:11:20.969591 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:20.969688 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:20.969688 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:20.969688 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:20.969688 master-0 kubenswrapper[7721]: I0216 02:11:20.969676 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:21.969911 master-0 kubenswrapper[7721]: I0216 02:11:21.969758 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:21.969911 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:21.969911 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:21.969911 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:21.969911 master-0 kubenswrapper[7721]: I0216 02:11:21.969889 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:22.968828 master-0 kubenswrapper[7721]: I0216 02:11:22.968749 7721 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:22.968828 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:22.968828 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:22.968828 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:22.969276 master-0 kubenswrapper[7721]: I0216 02:11:22.968850 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:23.969271 master-0 kubenswrapper[7721]: I0216 02:11:23.969200 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:23.969271 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:23.969271 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:23.969271 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:23.970491 master-0 kubenswrapper[7721]: I0216 02:11:23.970392 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:24.970145 master-0 kubenswrapper[7721]: I0216 02:11:24.970024 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
02:11:24.970145 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:24.970145 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:24.970145 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:24.970840 master-0 kubenswrapper[7721]: I0216 02:11:24.970203 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:25.968669 master-0 kubenswrapper[7721]: I0216 02:11:25.968567 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:25.968669 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:25.968669 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:25.968669 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:25.969095 master-0 kubenswrapper[7721]: I0216 02:11:25.968695 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:26.307193 master-0 kubenswrapper[7721]: I0216 02:11:26.307028 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:11:26.314415 master-0 kubenswrapper[7721]: I0216 02:11:26.314351 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:11:26.968492 master-0 kubenswrapper[7721]: I0216 02:11:26.968381 7721 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:26.968492 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:26.968492 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:26.968492 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:26.968917 master-0 kubenswrapper[7721]: I0216 02:11:26.968509 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:27.969949 master-0 kubenswrapper[7721]: I0216 02:11:27.969857 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:27.969949 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:27.969949 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:27.969949 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:27.971176 master-0 kubenswrapper[7721]: I0216 02:11:27.969976 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:28.968576 master-0 kubenswrapper[7721]: I0216 02:11:28.968499 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
02:11:28.968576 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:28.968576 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:28.968576 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:28.968915 master-0 kubenswrapper[7721]: I0216 02:11:28.968589 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:29.968143 master-0 kubenswrapper[7721]: I0216 02:11:29.968099 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:29.968143 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:29.968143 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:29.968143 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:29.968909 master-0 kubenswrapper[7721]: I0216 02:11:29.968877 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:30.724948 master-0 kubenswrapper[7721]: I0216 02:11:30.724820 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:30.760482 master-0 kubenswrapper[7721]: I0216 02:11:30.760388 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3a7ee083-994c-4d2a-ac66-eea8663f8e3f" Feb 16 02:11:30.760482 master-0 kubenswrapper[7721]: I0216 02:11:30.760475 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3a7ee083-994c-4d2a-ac66-eea8663f8e3f" Feb 16 02:11:30.777205 master-0 kubenswrapper[7721]: I0216 02:11:30.777127 7721 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:30.802766 master-0 kubenswrapper[7721]: I0216 02:11:30.796708 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:11:30.802766 master-0 kubenswrapper[7721]: I0216 02:11:30.796869 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:30.806236 master-0 kubenswrapper[7721]: I0216 02:11:30.804641 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:11:30.813621 master-0 kubenswrapper[7721]: I0216 02:11:30.811798 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:11:30.832915 master-0 kubenswrapper[7721]: W0216 02:11:30.832827 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod532487ad51c30257b744e7c1c79fb34f.slice/crio-48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea WatchSource:0}: Error finding container 48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea: Status 404 returned error can't find the container with id 48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea Feb 16 02:11:30.968222 master-0 kubenswrapper[7721]: I0216 02:11:30.968165 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:30.968222 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:30.968222 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:30.968222 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:30.969241 master-0 kubenswrapper[7721]: I0216 02:11:30.968235 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:31.662487 master-0 kubenswrapper[7721]: I0216 
02:11:31.662328 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"f04772c7428fae13ccd84b0277f134e7c93b419ed981a3e828e88653f2fe03b1"} Feb 16 02:11:31.662861 master-0 kubenswrapper[7721]: I0216 02:11:31.662785 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"2764ae0cc6da6493da6557571cb01f0bf8aba4f15b5e56b0e8f80cf54cb86272"} Feb 16 02:11:31.663036 master-0 kubenswrapper[7721]: I0216 02:11:31.663008 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea"} Feb 16 02:11:31.969149 master-0 kubenswrapper[7721]: I0216 02:11:31.969065 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:31.969149 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:31.969149 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:31.969149 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:31.970160 master-0 kubenswrapper[7721]: I0216 02:11:31.970114 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:32.672646 master-0 kubenswrapper[7721]: I0216 02:11:32.672601 7721 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"47f370468f9a506b6024de7fb2029d49ff3b6445c9e16b06204e3c886ebdacc9"} Feb 16 02:11:32.673010 master-0 kubenswrapper[7721]: I0216 02:11:32.672980 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"c1607c7a684a009a85d360c6358aedc027d89ca14606abafaf65b0d9cbaca7c9"} Feb 16 02:11:32.712954 master-0 kubenswrapper[7721]: I0216 02:11:32.712803 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.712771325 podStartE2EDuration="2.712771325s" podCreationTimestamp="2026-02-16 02:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:11:32.701281375 +0000 UTC m=+296.195515687" watchObservedRunningTime="2026-02-16 02:11:32.712771325 +0000 UTC m=+296.207005627" Feb 16 02:11:32.969835 master-0 kubenswrapper[7721]: I0216 02:11:32.969673 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:32.969835 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:32.969835 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:32.969835 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:32.970912 master-0 kubenswrapper[7721]: I0216 02:11:32.970687 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:33.972532 master-0 kubenswrapper[7721]: I0216 02:11:33.972181 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:33.972532 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:33.972532 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:33.972532 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:33.972532 master-0 kubenswrapper[7721]: I0216 02:11:33.972361 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:34.968158 master-0 kubenswrapper[7721]: I0216 02:11:34.968103 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:34.968158 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:34.968158 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:34.968158 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:34.968694 master-0 kubenswrapper[7721]: I0216 02:11:34.968649 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:35.969269 master-0 kubenswrapper[7721]: I0216 02:11:35.969140 7721 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:35.969269 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:35.969269 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:35.969269 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:35.969269 master-0 kubenswrapper[7721]: I0216 02:11:35.969252 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:36.968153 master-0 kubenswrapper[7721]: I0216 02:11:36.968052 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:36.968153 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:36.968153 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:36.968153 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:36.968672 master-0 kubenswrapper[7721]: I0216 02:11:36.968155 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:37.969172 master-0 kubenswrapper[7721]: I0216 02:11:37.969083 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
02:11:37.969172 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:37.969172 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:37.969172 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:37.970254 master-0 kubenswrapper[7721]: I0216 02:11:37.969233 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:38.968935 master-0 kubenswrapper[7721]: I0216 02:11:38.968819 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:38.968935 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:38.968935 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:38.968935 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:11:38.970110 master-0 kubenswrapper[7721]: I0216 02:11:38.968938 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:39.968709 master-0 kubenswrapper[7721]: I0216 02:11:39.968609 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:11:39.968709 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:11:39.968709 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:11:39.968709 master-0 kubenswrapper[7721]: healthz 
check failed Feb 16 02:11:39.968709 master-0 kubenswrapper[7721]: I0216 02:11:39.968710 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:11:40.798009 master-0 kubenswrapper[7721]: I0216 02:11:40.797957 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:40.798368 master-0 kubenswrapper[7721]: I0216 02:11:40.798339 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:40.798642 master-0 kubenswrapper[7721]: I0216 02:11:40.798571 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:40.799245 master-0 kubenswrapper[7721]: I0216 02:11:40.799196 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:40.804829 master-0 kubenswrapper[7721]: I0216 02:11:40.804763 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:40.805794 master-0 kubenswrapper[7721]: I0216 02:11:40.805764 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:11:40.969096 master-0 kubenswrapper[7721]: I0216 02:11:40.968968 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
02:11:40.969096 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:40.969096 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:40.969096 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:40.970041 master-0 kubenswrapper[7721]: I0216 02:11:40.969632 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:41.767835 master-0 kubenswrapper[7721]: I0216 02:11:41.767766 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:41.768139 master-0 kubenswrapper[7721]: I0216 02:11:41.768073 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:11:41.968197 master-0 kubenswrapper[7721]: I0216 02:11:41.968040 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:41.968197 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:41.968197 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:41.968197 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:41.968692 master-0 kubenswrapper[7721]: I0216 02:11:41.968219 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:42.970263 master-0 kubenswrapper[7721]: I0216 02:11:42.970175 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:42.970263 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:42.970263 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:42.970263 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:42.971008 master-0 kubenswrapper[7721]: I0216 02:11:42.970265 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:43.968878 master-0 kubenswrapper[7721]: I0216 02:11:43.968791 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:43.968878 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:43.968878 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:43.968878 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:43.969373 master-0 kubenswrapper[7721]: I0216 02:11:43.968928 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:44.968186 master-0 kubenswrapper[7721]: I0216 02:11:44.968083 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:44.968186 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:44.968186 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:44.968186 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:44.968186 master-0 kubenswrapper[7721]: I0216 02:11:44.968167 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:45.968283 master-0 kubenswrapper[7721]: I0216 02:11:45.968215 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:45.968283 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:45.968283 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:45.968283 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:45.969167 master-0 kubenswrapper[7721]: I0216 02:11:45.969129 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:46.968468 master-0 kubenswrapper[7721]: I0216 02:11:46.968348 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:46.968468 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:46.968468 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:46.968468 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:46.969546 master-0 kubenswrapper[7721]: I0216 02:11:46.968479 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:47.974087 master-0 kubenswrapper[7721]: I0216 02:11:47.974033 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:47.974087 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:47.974087 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:47.974087 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:47.975172 master-0 kubenswrapper[7721]: I0216 02:11:47.975129 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:48.968975 master-0 kubenswrapper[7721]: I0216 02:11:48.968886 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:48.968975 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:48.968975 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:48.968975 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:48.969414 master-0 kubenswrapper[7721]: I0216 02:11:48.968998 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:49.970353 master-0 kubenswrapper[7721]: I0216 02:11:49.969815 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:49.970353 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:49.970353 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:49.970353 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:49.970353 master-0 kubenswrapper[7721]: I0216 02:11:49.969897 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:50.969115 master-0 kubenswrapper[7721]: I0216 02:11:50.969015 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:50.969115 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:50.969115 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:50.969115 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:50.969679 master-0 kubenswrapper[7721]: I0216 02:11:50.969133 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:51.023930 master-0 kubenswrapper[7721]: I0216 02:11:51.023853 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/1.log"
Feb 16 02:11:51.025302 master-0 kubenswrapper[7721]: I0216 02:11:51.025241 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/0.log"
Feb 16 02:11:51.025467 master-0 kubenswrapper[7721]: I0216 02:11:51.025319 7721 generic.go:334] "Generic (PLEG): container finished" podID="04804a08-e3a5-46f3-abcb-967866834baa" containerID="7c934fcf17603ba6880730036301dc7740655f1f475a9dcfef2ce3f1ec5f5b71" exitCode=1
Feb 16 02:11:51.025467 master-0 kubenswrapper[7721]: I0216 02:11:51.025366 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerDied","Data":"7c934fcf17603ba6880730036301dc7740655f1f475a9dcfef2ce3f1ec5f5b71"}
Feb 16 02:11:51.025467 master-0 kubenswrapper[7721]: I0216 02:11:51.025413 7721 scope.go:117] "RemoveContainer" containerID="87993edba6f07930300de55e54a0440afea4e88c5ea50fe933142a412c18bfd2"
Feb 16 02:11:51.027168 master-0 kubenswrapper[7721]: I0216 02:11:51.026045 7721 scope.go:117] "RemoveContainer" containerID="7c934fcf17603ba6880730036301dc7740655f1f475a9dcfef2ce3f1ec5f5b71"
Feb 16 02:11:51.027168 master-0 kubenswrapper[7721]: E0216 02:11:51.026374 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:11:51.969127 master-0 kubenswrapper[7721]: I0216 02:11:51.969009 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:51.969127 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:51.969127 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:51.969127 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:51.969835 master-0 kubenswrapper[7721]: I0216 02:11:51.969151 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:52.036671 master-0 kubenswrapper[7721]: I0216 02:11:52.036598 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/1.log"
Feb 16 02:11:52.886800 master-0 kubenswrapper[7721]: I0216 02:11:52.886592 7721 scope.go:117] "RemoveContainer" containerID="7921033cca2163ce5e4549f18d23b23e3797f9935bb1bd7ed5580d96e9031f08"
Feb 16 02:11:52.912169 master-0 kubenswrapper[7721]: I0216 02:11:52.912116 7721 scope.go:117] "RemoveContainer" containerID="55eb3affcbf11e7c854417599461f5fea225338fb48ac5a0d81226de9a467092"
Feb 16 02:11:52.970976 master-0 kubenswrapper[7721]: I0216 02:11:52.970888 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:52.970976 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:52.970976 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:52.970976 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:52.971329 master-0 kubenswrapper[7721]: I0216 02:11:52.971033 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:53.969099 master-0 kubenswrapper[7721]: I0216 02:11:53.969002 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:53.969099 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:53.969099 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:53.969099 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:53.969099 master-0 kubenswrapper[7721]: I0216 02:11:53.969096 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:54.969094 master-0 kubenswrapper[7721]: I0216 02:11:54.969013 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:54.969094 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:54.969094 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:54.969094 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:54.970101 master-0 kubenswrapper[7721]: I0216 02:11:54.969100 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:55.969016 master-0 kubenswrapper[7721]: I0216 02:11:55.968933 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:55.969016 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:55.969016 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:55.969016 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:55.970044 master-0 kubenswrapper[7721]: I0216 02:11:55.969044 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:56.968798 master-0 kubenswrapper[7721]: I0216 02:11:56.968715 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:56.968798 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:56.968798 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:56.968798 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:56.969378 master-0 kubenswrapper[7721]: I0216 02:11:56.969331 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:57.969144 master-0 kubenswrapper[7721]: I0216 02:11:57.969068 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:57.969144 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:57.969144 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:57.969144 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:57.970354 master-0 kubenswrapper[7721]: I0216 02:11:57.969170 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:58.968252 master-0 kubenswrapper[7721]: I0216 02:11:58.968161 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:58.968252 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:58.968252 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:58.968252 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:58.968750 master-0 kubenswrapper[7721]: I0216 02:11:58.968259 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:11:59.969055 master-0 kubenswrapper[7721]: I0216 02:11:59.968966 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:11:59.969055 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:11:59.969055 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:11:59.969055 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:11:59.970032 master-0 kubenswrapper[7721]: I0216 02:11:59.969075 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:00.968304 master-0 kubenswrapper[7721]: I0216 02:12:00.968201 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:00.968304 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:00.968304 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:00.968304 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:00.968859 master-0 kubenswrapper[7721]: I0216 02:12:00.968318 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:01.969406 master-0 kubenswrapper[7721]: I0216 02:12:01.969285 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:01.969406 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:01.969406 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:01.969406 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:01.970536 master-0 kubenswrapper[7721]: I0216 02:12:01.969401 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:02.726198 master-0 kubenswrapper[7721]: I0216 02:12:02.726002 7721 scope.go:117] "RemoveContainer" containerID="7c934fcf17603ba6880730036301dc7740655f1f475a9dcfef2ce3f1ec5f5b71"
Feb 16 02:12:02.968619 master-0 kubenswrapper[7721]: I0216 02:12:02.968509 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:02.968619 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:02.968619 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:02.968619 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:02.969091 master-0 kubenswrapper[7721]: I0216 02:12:02.968613 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:03.128752 master-0 kubenswrapper[7721]: I0216 02:12:03.128580 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/1.log"
Feb 16 02:12:03.129712 master-0 kubenswrapper[7721]: I0216 02:12:03.129158 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerStarted","Data":"6a5eef57bcb093780918b99bdb16653d8db2a96f5c207767f7945385b5adfeef"}
Feb 16 02:12:03.968492 master-0 kubenswrapper[7721]: I0216 02:12:03.968383 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:03.968492 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:03.968492 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:03.968492 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:03.969152 master-0 kubenswrapper[7721]: I0216 02:12:03.968493 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:04.967919 master-0 kubenswrapper[7721]: I0216 02:12:04.967836 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:04.967919 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:04.967919 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:04.967919 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:04.969008 master-0 kubenswrapper[7721]: I0216 02:12:04.967936 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:05.969812 master-0 kubenswrapper[7721]: I0216 02:12:05.969719 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:05.969812 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:05.969812 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:05.969812 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:05.970782 master-0 kubenswrapper[7721]: I0216 02:12:05.969836 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:06.968488 master-0 kubenswrapper[7721]: I0216 02:12:06.968323 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:06.968488 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:06.968488 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:06.968488 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:06.968488 master-0 kubenswrapper[7721]: I0216 02:12:06.968418 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:07.969243 master-0 kubenswrapper[7721]: I0216 02:12:07.969139 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:07.969243 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:07.969243 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:07.969243 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:07.970217 master-0 kubenswrapper[7721]: I0216 02:12:07.969260 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:08.969091 master-0 kubenswrapper[7721]: I0216 02:12:08.969026 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:08.969091 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:08.969091 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:08.969091 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:08.970196 master-0 kubenswrapper[7721]: I0216 02:12:08.969113 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:09.978537 master-0 kubenswrapper[7721]: I0216 02:12:09.978414 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:09.978537 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:09.978537 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:09.978537 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:09.979771 master-0 kubenswrapper[7721]: I0216 02:12:09.978596 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:10.968504 master-0 kubenswrapper[7721]: I0216 02:12:10.968382 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:10.968504 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:10.968504 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:10.968504 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:10.968929 master-0 kubenswrapper[7721]: I0216 02:12:10.968518 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:11.968356 master-0 kubenswrapper[7721]: I0216 02:12:11.968260 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:11.968356 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:11.968356 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:11.968356 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:11.969380 master-0 kubenswrapper[7721]: I0216 02:12:11.968363 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:12.970611 master-0 kubenswrapper[7721]: I0216 02:12:12.970499 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:12.970611 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:12.970611 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:12.970611 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:12.970611 master-0 kubenswrapper[7721]: I0216 02:12:12.970609 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:13.968981 master-0 kubenswrapper[7721]: I0216 02:12:13.968896 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:13.968981 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:13.968981 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:13.968981 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:13.969520 master-0 kubenswrapper[7721]: I0216 02:12:13.969009 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:14.969734 master-0 kubenswrapper[7721]: I0216 02:12:14.969669 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:14.969734 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:14.969734 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:14.969734 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:14.970950 master-0 kubenswrapper[7721]: I0216 02:12:14.969767 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:15.969147 master-0 kubenswrapper[7721]: I0216 02:12:15.969056 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:15.969147 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:15.969147 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:15.969147 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:15.969777 master-0 kubenswrapper[7721]: I0216 02:12:15.969171 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:16.969764 master-0 kubenswrapper[7721]: I0216 02:12:16.969053 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:16.969764 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:16.969764 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:16.969764 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:16.969764 master-0 kubenswrapper[7721]: I0216 02:12:16.969160 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:17.969308 master-0 kubenswrapper[7721]: I0216 02:12:17.969196 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:17.969308 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:17.969308 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:17.969308 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:17.969757 master-0 kubenswrapper[7721]: I0216 02:12:17.969357 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:18.969115 master-0 kubenswrapper[7721]: I0216 02:12:18.968996 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:18.969115 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:18.969115 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:18.969115 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:18.970111 master-0 kubenswrapper[7721]: I0216 02:12:18.969141 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:19.968814 master-0 kubenswrapper[7721]: I0216 02:12:19.968730 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:19.968814 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:19.968814 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:19.968814 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:19.969849 master-0 kubenswrapper[7721]: I0216 02:12:19.968838 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:12:20.968761 master-0 kubenswrapper[7721]: I0216 02:12:20.968675 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:12:20.968761 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:12:20.968761 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:12:20.968761 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:12:20.969942 master-0 kubenswrapper[7721]: I0216 02:12:20.968778 7721 prober.go:107] "Probe
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:21.969724 master-0 kubenswrapper[7721]: I0216 02:12:21.969588 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:21.969724 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:21.969724 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:21.969724 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:21.969724 master-0 kubenswrapper[7721]: I0216 02:12:21.969676 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:22.969013 master-0 kubenswrapper[7721]: I0216 02:12:22.968922 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:22.969013 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:22.969013 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:22.969013 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:22.969562 master-0 kubenswrapper[7721]: I0216 02:12:22.969034 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 02:12:23.970867 master-0 kubenswrapper[7721]: I0216 02:12:23.970324 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:23.970867 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:23.970867 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:23.970867 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:23.970867 master-0 kubenswrapper[7721]: I0216 02:12:23.970405 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:24.968267 master-0 kubenswrapper[7721]: I0216 02:12:24.968218 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:24.968267 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:24.968267 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:24.968267 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:24.968690 master-0 kubenswrapper[7721]: I0216 02:12:24.968658 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:25.968859 master-0 kubenswrapper[7721]: I0216 02:12:25.968784 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:25.968859 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:25.968859 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:25.968859 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:25.969790 master-0 kubenswrapper[7721]: I0216 02:12:25.968874 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:26.969011 master-0 kubenswrapper[7721]: I0216 02:12:26.968931 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:26.969011 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:26.969011 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:26.969011 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:26.970770 master-0 kubenswrapper[7721]: I0216 02:12:26.970695 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:27.970171 master-0 kubenswrapper[7721]: I0216 02:12:27.970063 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:27.970171 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 
02:12:27.970171 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:27.970171 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:27.970874 master-0 kubenswrapper[7721]: I0216 02:12:27.970206 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:28.969081 master-0 kubenswrapper[7721]: I0216 02:12:28.968977 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:28.969081 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:28.969081 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:28.969081 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:28.969616 master-0 kubenswrapper[7721]: I0216 02:12:28.969132 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:29.969722 master-0 kubenswrapper[7721]: I0216 02:12:29.969194 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:29.969722 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:29.969722 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:29.969722 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:29.969722 master-0 kubenswrapper[7721]: I0216 02:12:29.969274 
7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:30.968579 master-0 kubenswrapper[7721]: I0216 02:12:30.968509 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:30.968579 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:30.968579 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:30.968579 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:30.969027 master-0 kubenswrapper[7721]: I0216 02:12:30.968591 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:31.969106 master-0 kubenswrapper[7721]: I0216 02:12:31.968999 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:31.969106 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:31.969106 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:31.969106 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:31.970361 master-0 kubenswrapper[7721]: I0216 02:12:31.969127 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 02:12:32.969781 master-0 kubenswrapper[7721]: I0216 02:12:32.969686 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:32.969781 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:32.969781 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:32.969781 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:32.970679 master-0 kubenswrapper[7721]: I0216 02:12:32.969785 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:33.968615 master-0 kubenswrapper[7721]: I0216 02:12:33.968525 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:12:33.968615 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:12:33.968615 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:12:33.968615 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:12:33.969067 master-0 kubenswrapper[7721]: I0216 02:12:33.968618 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:12:33.969067 master-0 kubenswrapper[7721]: I0216 02:12:33.968686 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:12:33.969535 master-0 kubenswrapper[7721]: I0216 02:12:33.969476 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"627ab5d9a2bbd36bf2da2e153f10cfc3717737712db4adb69838076f9b75b2a5"} pod="openshift-ingress/router-default-864ddd5f56-ffptx" containerMessage="Container router failed startup probe, will be restarted" Feb 16 02:12:33.969674 master-0 kubenswrapper[7721]: I0216 02:12:33.969544 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" containerID="cri-o://627ab5d9a2bbd36bf2da2e153f10cfc3717737712db4adb69838076f9b75b2a5" gracePeriod=3600 Feb 16 02:13:20.819074 master-0 kubenswrapper[7721]: I0216 02:13:20.818987 7721 generic.go:334] "Generic (PLEG): container finished" podID="17390d9a-148d-4927-a831-5bc4873c43d5" containerID="627ab5d9a2bbd36bf2da2e153f10cfc3717737712db4adb69838076f9b75b2a5" exitCode=0 Feb 16 02:13:20.819074 master-0 kubenswrapper[7721]: I0216 02:13:20.819074 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerDied","Data":"627ab5d9a2bbd36bf2da2e153f10cfc3717737712db4adb69838076f9b75b2a5"} Feb 16 02:13:20.820232 master-0 kubenswrapper[7721]: I0216 02:13:20.819160 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerStarted","Data":"226c9fa2763b9d22f9a5e31efca1219592ef35d729ba28d906add8e85efb3944"} Feb 16 02:13:20.965757 master-0 kubenswrapper[7721]: I0216 02:13:20.965656 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 
02:13:20.965757 master-0 kubenswrapper[7721]: I0216 02:13:20.965756 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:13:20.969744 master-0 kubenswrapper[7721]: I0216 02:13:20.969678 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:20.969744 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:20.969744 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:20.969744 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:20.970034 master-0 kubenswrapper[7721]: I0216 02:13:20.969763 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:21.970425 master-0 kubenswrapper[7721]: I0216 02:13:21.970298 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:21.970425 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:21.970425 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:21.970425 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:21.971476 master-0 kubenswrapper[7721]: I0216 02:13:21.970474 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 
02:13:22.968817 master-0 kubenswrapper[7721]: I0216 02:13:22.968693 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:22.968817 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:22.968817 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:22.968817 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:22.969542 master-0 kubenswrapper[7721]: I0216 02:13:22.968817 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:23.969527 master-0 kubenswrapper[7721]: I0216 02:13:23.968872 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:23.969527 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:23.969527 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:23.969527 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:23.969527 master-0 kubenswrapper[7721]: I0216 02:13:23.968982 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:24.968431 master-0 kubenswrapper[7721]: I0216 02:13:24.968348 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:24.968431 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:24.968431 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:24.968431 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:24.968900 master-0 kubenswrapper[7721]: I0216 02:13:24.968487 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:25.969794 master-0 kubenswrapper[7721]: I0216 02:13:25.969627 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:25.969794 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:25.969794 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:25.969794 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:25.970881 master-0 kubenswrapper[7721]: I0216 02:13:25.969800 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:26.969025 master-0 kubenswrapper[7721]: I0216 02:13:26.968900 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:26.969025 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:26.969025 
master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:26.969025 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:26.969667 master-0 kubenswrapper[7721]: I0216 02:13:26.969047 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:27.968976 master-0 kubenswrapper[7721]: I0216 02:13:27.968848 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:27.968976 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:27.968976 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:27.968976 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:27.968976 master-0 kubenswrapper[7721]: I0216 02:13:27.968953 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:28.969742 master-0 kubenswrapper[7721]: I0216 02:13:28.969644 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:28.969742 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:28.969742 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:28.969742 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:28.971057 master-0 kubenswrapper[7721]: I0216 02:13:28.969777 7721 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:29.968327 master-0 kubenswrapper[7721]: I0216 02:13:29.968198 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:29.968327 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:29.968327 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:29.968327 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:29.968327 master-0 kubenswrapper[7721]: I0216 02:13:29.968319 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:30.969170 master-0 kubenswrapper[7721]: I0216 02:13:30.969033 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:30.969170 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:30.969170 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:30.969170 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:30.969170 master-0 kubenswrapper[7721]: I0216 02:13:30.969161 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 16 02:13:31.971515 master-0 kubenswrapper[7721]: I0216 02:13:31.971397 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:31.971515 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:31.971515 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:31.971515 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:31.972560 master-0 kubenswrapper[7721]: I0216 02:13:31.971552 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:32.970099 master-0 kubenswrapper[7721]: I0216 02:13:32.969973 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:32.970099 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:32.970099 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:32.970099 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:32.970099 master-0 kubenswrapper[7721]: I0216 02:13:32.970087 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:33.969212 master-0 kubenswrapper[7721]: I0216 02:13:33.969112 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:33.969212 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:33.969212 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:33.969212 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:33.970241 master-0 kubenswrapper[7721]: I0216 02:13:33.969216 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:34.968812 master-0 kubenswrapper[7721]: I0216 02:13:34.968723 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:34.968812 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:13:34.968812 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:13:34.968812 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:13:34.968812 master-0 kubenswrapper[7721]: I0216 02:13:34.968808 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:13:35.968951 master-0 kubenswrapper[7721]: I0216 02:13:35.968856 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:13:35.968951 master-0 kubenswrapper[7721]: 
[-]has-synced failed: reason withheld
Feb 16 02:13:35.968951 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:35.968951 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:35.968951 master-0 kubenswrapper[7721]: I0216 02:13:35.968947 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:36.969962 master-0 kubenswrapper[7721]: I0216 02:13:36.969853 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:36.969962 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:36.969962 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:36.969962 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:36.970981 master-0 kubenswrapper[7721]: I0216 02:13:36.969968 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:37.969773 master-0 kubenswrapper[7721]: I0216 02:13:37.969676 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:37.969773 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:37.969773 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:37.969773 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:37.970792 master-0 kubenswrapper[7721]: I0216 02:13:37.969784 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:38.970293 master-0 kubenswrapper[7721]: I0216 02:13:38.970199 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:38.970293 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:38.970293 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:38.970293 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:38.971297 master-0 kubenswrapper[7721]: I0216 02:13:38.970305 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:39.968739 master-0 kubenswrapper[7721]: I0216 02:13:39.968614 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:39.968739 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:39.968739 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:39.968739 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:39.969210 master-0 kubenswrapper[7721]: I0216 02:13:39.968765 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:40.969614 master-0 kubenswrapper[7721]: I0216 02:13:40.969533 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:40.969614 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:40.969614 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:40.969614 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:40.970823 master-0 kubenswrapper[7721]: I0216 02:13:40.969618 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:41.969575 master-0 kubenswrapper[7721]: I0216 02:13:41.969461 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:41.969575 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:41.969575 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:41.969575 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:41.970772 master-0 kubenswrapper[7721]: I0216 02:13:41.969572 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:42.969181 master-0 kubenswrapper[7721]: I0216 02:13:42.968982 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:42.969181 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:42.969181 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:42.969181 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:42.969181 master-0 kubenswrapper[7721]: I0216 02:13:42.969071 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:43.970700 master-0 kubenswrapper[7721]: I0216 02:13:43.970382 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:43.970700 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:43.970700 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:43.970700 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:43.970700 master-0 kubenswrapper[7721]: I0216 02:13:43.970499 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:44.970030 master-0 kubenswrapper[7721]: I0216 02:13:44.969918 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:44.970030 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:44.970030 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:44.970030 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:44.970030 master-0 kubenswrapper[7721]: I0216 02:13:44.970033 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:45.970210 master-0 kubenswrapper[7721]: I0216 02:13:45.970073 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:45.970210 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:45.970210 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:45.970210 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:45.971603 master-0 kubenswrapper[7721]: I0216 02:13:45.970235 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:46.969084 master-0 kubenswrapper[7721]: I0216 02:13:46.969002 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:46.969084 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:46.969084 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:46.969084 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:46.971887 master-0 kubenswrapper[7721]: I0216 02:13:46.969107 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:47.969922 master-0 kubenswrapper[7721]: I0216 02:13:47.969780 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:47.969922 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:47.969922 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:47.969922 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:47.969922 master-0 kubenswrapper[7721]: I0216 02:13:47.969913 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:48.969620 master-0 kubenswrapper[7721]: I0216 02:13:48.969491 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:48.969620 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:48.969620 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:48.969620 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:48.970645 master-0 kubenswrapper[7721]: I0216 02:13:48.969626 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:49.968806 master-0 kubenswrapper[7721]: I0216 02:13:49.968728 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:49.968806 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:49.968806 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:49.968806 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:49.969273 master-0 kubenswrapper[7721]: I0216 02:13:49.968824 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:50.969166 master-0 kubenswrapper[7721]: I0216 02:13:50.969045 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:50.969166 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:50.969166 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:50.969166 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:50.969166 master-0 kubenswrapper[7721]: I0216 02:13:50.969164 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:51.969028 master-0 kubenswrapper[7721]: I0216 02:13:51.968916 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:51.969028 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:51.969028 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:51.969028 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:51.970038 master-0 kubenswrapper[7721]: I0216 02:13:51.969028 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:52.969503 master-0 kubenswrapper[7721]: I0216 02:13:52.969350 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:52.969503 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:52.969503 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:52.969503 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:52.969503 master-0 kubenswrapper[7721]: I0216 02:13:52.969517 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:53.047715 master-0 kubenswrapper[7721]: I0216 02:13:53.047614 7721 scope.go:117] "RemoveContainer" containerID="c3d16f1de74c265a7099eb264692d380180b156ac079a97448ae93f5a03844ad"
Feb 16 02:13:53.070832 master-0 kubenswrapper[7721]: I0216 02:13:53.070679 7721 scope.go:117] "RemoveContainer" containerID="699ab655e0c4a6b31309afab886f90e9b48704fa893a80a048fd279bf60a2f9d"
Feb 16 02:13:53.968967 master-0 kubenswrapper[7721]: I0216 02:13:53.968409 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:53.968967 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:53.968967 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:53.968967 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:53.968967 master-0 kubenswrapper[7721]: I0216 02:13:53.968544 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:54.969053 master-0 kubenswrapper[7721]: I0216 02:13:54.968917 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:54.969053 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:54.969053 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:54.969053 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:54.969053 master-0 kubenswrapper[7721]: I0216 02:13:54.969056 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:55.969465 master-0 kubenswrapper[7721]: I0216 02:13:55.969335 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:55.969465 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:55.969465 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:55.969465 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:55.970507 master-0 kubenswrapper[7721]: I0216 02:13:55.969492 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:56.969900 master-0 kubenswrapper[7721]: I0216 02:13:56.969803 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:56.969900 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:56.969900 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:56.969900 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:56.969900 master-0 kubenswrapper[7721]: I0216 02:13:56.969898 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:57.969407 master-0 kubenswrapper[7721]: I0216 02:13:57.969290 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:57.969407 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:57.969407 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:57.969407 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:57.969946 master-0 kubenswrapper[7721]: I0216 02:13:57.969416 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:58.970023 master-0 kubenswrapper[7721]: I0216 02:13:58.969928 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:58.970023 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:58.970023 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:58.970023 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:58.971201 master-0 kubenswrapper[7721]: I0216 02:13:58.970043 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:13:59.968623 master-0 kubenswrapper[7721]: I0216 02:13:59.968523 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:13:59.968623 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:13:59.968623 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:13:59.968623 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:13:59.969145 master-0 kubenswrapper[7721]: I0216 02:13:59.968642 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:00.969007 master-0 kubenswrapper[7721]: I0216 02:14:00.968909 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:00.969007 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:00.969007 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:00.969007 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:00.970011 master-0 kubenswrapper[7721]: I0216 02:14:00.969026 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:01.969606 master-0 kubenswrapper[7721]: I0216 02:14:01.969503 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:01.969606 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:01.969606 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:01.969606 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:01.970664 master-0 kubenswrapper[7721]: I0216 02:14:01.969620 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:02.970189 master-0 kubenswrapper[7721]: I0216 02:14:02.970103 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:02.970189 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:02.970189 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:02.970189 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:02.971014 master-0 kubenswrapper[7721]: I0216 02:14:02.970202 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:03.970176 master-0 kubenswrapper[7721]: I0216 02:14:03.970059 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:03.970176 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:03.970176 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:03.970176 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:03.971431 master-0 kubenswrapper[7721]: I0216 02:14:03.970214 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:04.953765 master-0 kubenswrapper[7721]: I0216 02:14:04.949249 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6t7mx"]
Feb 16 02:14:04.953765 master-0 kubenswrapper[7721]: E0216 02:14:04.949528 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b79cd7f-675e-4778-be06-95e79b1c008a" containerName="installer"
Feb 16 02:14:04.953765 master-0 kubenswrapper[7721]: I0216 02:14:04.949543 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b79cd7f-675e-4778-be06-95e79b1c008a" containerName="installer"
Feb 16 02:14:04.953765 master-0 kubenswrapper[7721]: I0216 02:14:04.949687 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b79cd7f-675e-4778-be06-95e79b1c008a" containerName="installer"
Feb 16 02:14:04.953765 master-0 kubenswrapper[7721]: I0216 02:14:04.950145 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:14:04.956515 master-0 kubenswrapper[7721]: I0216 02:14:04.954317 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 02:14:04.956515 master-0 kubenswrapper[7721]: I0216 02:14:04.954346 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zs9t"
Feb 16 02:14:04.956515 master-0 kubenswrapper[7721]: I0216 02:14:04.954611 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 02:14:04.956515 master-0 kubenswrapper[7721]: I0216 02:14:04.954652 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 02:14:04.969885 master-0 kubenswrapper[7721]: I0216 02:14:04.969097 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:04.969885 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:04.969885 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:04.969885 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:04.969885 master-0 kubenswrapper[7721]: I0216 02:14:04.969191 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:04.973709 master-0 kubenswrapper[7721]: I0216 02:14:04.973650 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6t7mx"]
Feb 16 02:14:05.122519 master-0 kubenswrapper[7721]: I0216 02:14:05.122405 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:14:05.122519 master-0 kubenswrapper[7721]: I0216 02:14:05.122527 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cnfs\" (UniqueName: \"kubernetes.io/projected/7846b339-c46d-4983-b586-a28f2868f665-kube-api-access-5cnfs\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:14:05.218914 master-0 kubenswrapper[7721]: I0216 02:14:05.218773 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/2.log"
Feb 16 02:14:05.220598 master-0 kubenswrapper[7721]: I0216 02:14:05.220549 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/1.log"
Feb 16 02:14:05.221327 master-0 kubenswrapper[7721]: I0216 02:14:05.221252 7721 generic.go:334] "Generic (PLEG): container finished" podID="04804a08-e3a5-46f3-abcb-967866834baa" containerID="6a5eef57bcb093780918b99bdb16653d8db2a96f5c207767f7945385b5adfeef" exitCode=1
Feb 16 02:14:05.221429 master-0 kubenswrapper[7721]: I0216 02:14:05.221330 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerDied","Data":"6a5eef57bcb093780918b99bdb16653d8db2a96f5c207767f7945385b5adfeef"}
Feb 16 02:14:05.221429 master-0 kubenswrapper[7721]: I0216 02:14:05.221395 7721 scope.go:117] "RemoveContainer" containerID="7c934fcf17603ba6880730036301dc7740655f1f475a9dcfef2ce3f1ec5f5b71"
Feb 16 02:14:05.222850 master-0 kubenswrapper[7721]: I0216 02:14:05.222791 7721 scope.go:117] "RemoveContainer" containerID="6a5eef57bcb093780918b99bdb16653d8db2a96f5c207767f7945385b5adfeef"
Feb 16 02:14:05.223386 master-0 kubenswrapper[7721]: E0216 02:14:05.223327 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:14:05.224035 master-0 kubenswrapper[7721]: I0216 02:14:05.223963 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:14:05.224160 master-0 kubenswrapper[7721]: I0216 02:14:05.224043 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cnfs\" (UniqueName: \"kubernetes.io/projected/7846b339-c46d-4983-b586-a28f2868f665-kube-api-access-5cnfs\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:14:05.230836 master-0 kubenswrapper[7721]: I0216 02:14:05.230775 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:14:05.262832 master-0 kubenswrapper[7721]: I0216 02:14:05.262777 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cnfs\" (UniqueName: \"kubernetes.io/projected/7846b339-c46d-4983-b586-a28f2868f665-kube-api-access-5cnfs\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:14:05.294904 master-0 kubenswrapper[7721]: I0216 02:14:05.294824 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:14:05.811499 master-0 kubenswrapper[7721]: I0216 02:14:05.811399 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6t7mx"]
Feb 16 02:14:05.970406 master-0 kubenswrapper[7721]: I0216 02:14:05.970169 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:05.970406 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:05.970406 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:05.970406 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:05.970765 master-0 kubenswrapper[7721]: I0216 02:14:05.970605 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:06.233009 master-0 kubenswrapper[7721]: I0216 02:14:06.232917 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6t7mx" event={"ID":"7846b339-c46d-4983-b586-a28f2868f665","Type":"ContainerStarted","Data":"b9a5461e89858828a3921c09148717115d1cdb5353427ea29dabcf7d2f9ec053"}
Feb 16 02:14:06.233009 master-0 kubenswrapper[7721]: I0216 02:14:06.233002 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6t7mx" event={"ID":"7846b339-c46d-4983-b586-a28f2868f665","Type":"ContainerStarted","Data":"d247b2be5bf1cb751d97a183400c5d6577356c5b5dce9cfa29235bda3ce8eb9a"}
Feb 16 02:14:06.238235 master-0 kubenswrapper[7721]: I0216 02:14:06.238155 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/2.log"
Feb 16 02:14:06.270013 master-0 kubenswrapper[7721]: I0216 02:14:06.269622 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6t7mx" podStartSLOduration=2.269584436 podStartE2EDuration="2.269584436s" podCreationTimestamp="2026-02-16 02:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:14:06.259590915 +0000 UTC m=+449.753825207" watchObservedRunningTime="2026-02-16 02:14:06.269584436 +0000 UTC m=+449.763818738"
Feb 16 02:14:06.968593 master-0 kubenswrapper[7721]: I0216 02:14:06.968497 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:06.968593 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:06.968593 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:06.968593 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:06.969060 master-0 kubenswrapper[7721]: I0216 02:14:06.968594 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:07.969313 master-0 kubenswrapper[7721]: I0216 02:14:07.969233 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:07.969313 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:07.969313 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:07.969313 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:07.970606 master-0 kubenswrapper[7721]: I0216 02:14:07.969321 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:08.969612 master-0 kubenswrapper[7721]: I0216 02:14:08.969510 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:08.969612 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:08.969612 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:08.969612 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:08.970596 master-0 kubenswrapper[7721]: I0216 02:14:08.969659 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:14:09.969359 master-0 kubenswrapper[7721]: I0216 02:14:09.969291 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:14:09.969359 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:14:09.969359 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:14:09.969359 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:14:09.970110 master-0
kubenswrapper[7721]: I0216 02:14:09.969376 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:10.968660 master-0 kubenswrapper[7721]: I0216 02:14:10.968327 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:10.968660 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:10.968660 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:10.968660 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:10.969578 master-0 kubenswrapper[7721]: I0216 02:14:10.969325 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:11.969921 master-0 kubenswrapper[7721]: I0216 02:14:11.969823 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:11.969921 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:11.969921 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:11.969921 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:11.971190 master-0 kubenswrapper[7721]: I0216 02:14:11.969948 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:12.969124 master-0 kubenswrapper[7721]: I0216 02:14:12.969009 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:12.969124 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:12.969124 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:12.969124 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:12.969124 master-0 kubenswrapper[7721]: I0216 02:14:12.969099 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:13.970498 master-0 kubenswrapper[7721]: I0216 02:14:13.970361 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:13.970498 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:13.970498 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:13.970498 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:13.971662 master-0 kubenswrapper[7721]: I0216 02:14:13.970539 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:14.968959 master-0 kubenswrapper[7721]: I0216 02:14:14.968836 7721 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:14.968959 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:14.968959 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:14.968959 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:14.968959 master-0 kubenswrapper[7721]: I0216 02:14:14.968925 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:15.969411 master-0 kubenswrapper[7721]: I0216 02:14:15.969264 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:15.969411 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:15.969411 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:15.969411 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:15.969411 master-0 kubenswrapper[7721]: I0216 02:14:15.969398 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:16.969865 master-0 kubenswrapper[7721]: I0216 02:14:16.969663 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
02:14:16.969865 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:16.969865 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:16.969865 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:16.971373 master-0 kubenswrapper[7721]: I0216 02:14:16.969866 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:17.724960 master-0 kubenswrapper[7721]: I0216 02:14:17.724868 7721 scope.go:117] "RemoveContainer" containerID="6a5eef57bcb093780918b99bdb16653d8db2a96f5c207767f7945385b5adfeef" Feb 16 02:14:17.725497 master-0 kubenswrapper[7721]: E0216 02:14:17.725374 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa" Feb 16 02:14:17.969706 master-0 kubenswrapper[7721]: I0216 02:14:17.969515 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:17.969706 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:17.969706 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:17.969706 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:17.969706 master-0 kubenswrapper[7721]: I0216 02:14:17.969632 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" 
podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:18.969721 master-0 kubenswrapper[7721]: I0216 02:14:18.969599 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:18.969721 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:18.969721 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:18.969721 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:18.970218 master-0 kubenswrapper[7721]: I0216 02:14:18.969717 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:19.968625 master-0 kubenswrapper[7721]: I0216 02:14:19.968540 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:19.968625 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:19.968625 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:19.968625 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:19.968625 master-0 kubenswrapper[7721]: I0216 02:14:19.968623 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:20.969415 master-0 kubenswrapper[7721]: I0216 02:14:20.969287 7721 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:20.969415 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:20.969415 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:20.969415 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:20.970681 master-0 kubenswrapper[7721]: I0216 02:14:20.969418 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:21.969474 master-0 kubenswrapper[7721]: I0216 02:14:21.969322 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:21.969474 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:21.969474 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:21.969474 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:21.970483 master-0 kubenswrapper[7721]: I0216 02:14:21.969483 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:22.969762 master-0 kubenswrapper[7721]: I0216 02:14:22.969595 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:22.969762 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:22.969762 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:22.969762 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:22.971058 master-0 kubenswrapper[7721]: I0216 02:14:22.969765 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:23.969614 master-0 kubenswrapper[7721]: I0216 02:14:23.969483 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:23.969614 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:23.969614 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:23.969614 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:23.969614 master-0 kubenswrapper[7721]: I0216 02:14:23.969604 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:24.969941 master-0 kubenswrapper[7721]: I0216 02:14:24.969813 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:24.969941 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:24.969941 master-0 kubenswrapper[7721]: [+]process-running ok 
Feb 16 02:14:24.969941 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:24.969941 master-0 kubenswrapper[7721]: I0216 02:14:24.969915 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:25.980362 master-0 kubenswrapper[7721]: I0216 02:14:25.980218 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:25.980362 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:25.980362 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:25.980362 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:25.980362 master-0 kubenswrapper[7721]: I0216 02:14:25.980341 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:26.968473 master-0 kubenswrapper[7721]: I0216 02:14:26.968346 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:26.968473 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:26.968473 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:26.968473 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:26.968945 master-0 kubenswrapper[7721]: I0216 02:14:26.968483 7721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:27.967864 master-0 kubenswrapper[7721]: I0216 02:14:27.967797 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:27.967864 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:27.967864 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:27.967864 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:27.968563 master-0 kubenswrapper[7721]: I0216 02:14:27.967918 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:28.725125 master-0 kubenswrapper[7721]: I0216 02:14:28.725037 7721 scope.go:117] "RemoveContainer" containerID="6a5eef57bcb093780918b99bdb16653d8db2a96f5c207767f7945385b5adfeef" Feb 16 02:14:28.968868 master-0 kubenswrapper[7721]: I0216 02:14:28.968776 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:28.968868 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:28.968868 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:28.968868 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:28.969911 master-0 kubenswrapper[7721]: I0216 02:14:28.968888 7721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:29.444473 master-0 kubenswrapper[7721]: I0216 02:14:29.442868 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/2.log" Feb 16 02:14:29.444473 master-0 kubenswrapper[7721]: I0216 02:14:29.443318 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerStarted","Data":"f8b63e1652f210c6b1254a1e3bd4ab515a460adc2668368a4276c7b3f8a11479"} Feb 16 02:14:29.968152 master-0 kubenswrapper[7721]: I0216 02:14:29.968054 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:29.968152 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:29.968152 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:29.968152 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:29.968152 master-0 kubenswrapper[7721]: I0216 02:14:29.968147 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:30.970228 master-0 kubenswrapper[7721]: I0216 02:14:30.970154 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 16 02:14:30.970228 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:30.970228 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:30.970228 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:30.971394 master-0 kubenswrapper[7721]: I0216 02:14:30.971346 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:31.970230 master-0 kubenswrapper[7721]: I0216 02:14:31.969849 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:31.970230 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:31.970230 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:31.970230 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:31.970230 master-0 kubenswrapper[7721]: I0216 02:14:31.969950 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:32.969260 master-0 kubenswrapper[7721]: I0216 02:14:32.969152 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:32.969260 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:32.969260 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:32.969260 
master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:32.969746 master-0 kubenswrapper[7721]: I0216 02:14:32.969275 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:33.969754 master-0 kubenswrapper[7721]: I0216 02:14:33.969668 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:33.969754 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:33.969754 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:33.969754 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:33.970733 master-0 kubenswrapper[7721]: I0216 02:14:33.969774 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:34.969135 master-0 kubenswrapper[7721]: I0216 02:14:34.968998 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:34.969135 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:34.969135 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:34.969135 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:34.969817 master-0 kubenswrapper[7721]: I0216 02:14:34.969148 7721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:35.969465 master-0 kubenswrapper[7721]: I0216 02:14:35.969356 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:35.969465 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:35.969465 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:35.969465 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:35.969837 master-0 kubenswrapper[7721]: I0216 02:14:35.969511 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:36.969044 master-0 kubenswrapper[7721]: I0216 02:14:36.968962 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:36.969044 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:36.969044 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:36.969044 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:36.969463 master-0 kubenswrapper[7721]: I0216 02:14:36.969072 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:37.969397 
master-0 kubenswrapper[7721]: I0216 02:14:37.969273 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:37.969397 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:37.969397 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:37.969397 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:37.969397 master-0 kubenswrapper[7721]: I0216 02:14:37.969387 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:38.969912 master-0 kubenswrapper[7721]: I0216 02:14:38.969800 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:38.969912 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:38.969912 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:38.969912 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:38.970935 master-0 kubenswrapper[7721]: I0216 02:14:38.969911 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:39.969276 master-0 kubenswrapper[7721]: I0216 02:14:39.969181 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:39.969276 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:39.969276 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:39.969276 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:39.969925 master-0 kubenswrapper[7721]: I0216 02:14:39.969303 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:40.970414 master-0 kubenswrapper[7721]: I0216 02:14:40.970229 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:40.970414 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:40.970414 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:40.970414 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:40.971706 master-0 kubenswrapper[7721]: I0216 02:14:40.971599 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:41.968985 master-0 kubenswrapper[7721]: I0216 02:14:41.968888 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:41.968985 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:41.968985 master-0 
kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:41.968985 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:41.969461 master-0 kubenswrapper[7721]: I0216 02:14:41.968996 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:42.969634 master-0 kubenswrapper[7721]: I0216 02:14:42.969525 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:42.969634 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:42.969634 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:42.969634 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:42.973138 master-0 kubenswrapper[7721]: I0216 02:14:42.969658 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:43.970140 master-0 kubenswrapper[7721]: I0216 02:14:43.970052 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:43.970140 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:43.970140 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:43.970140 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:43.972774 master-0 kubenswrapper[7721]: I0216 02:14:43.970147 7721 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:44.969137 master-0 kubenswrapper[7721]: I0216 02:14:44.969037 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:44.969137 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:44.969137 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:44.969137 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:44.969137 master-0 kubenswrapper[7721]: I0216 02:14:44.969135 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:45.972874 master-0 kubenswrapper[7721]: I0216 02:14:45.972746 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:45.972874 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:45.972874 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:45.972874 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:45.974073 master-0 kubenswrapper[7721]: I0216 02:14:45.972880 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 02:14:46.969191 master-0 kubenswrapper[7721]: I0216 02:14:46.969071 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:46.969191 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:46.969191 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:46.969191 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:46.969863 master-0 kubenswrapper[7721]: I0216 02:14:46.969188 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:47.968959 master-0 kubenswrapper[7721]: I0216 02:14:47.968868 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:47.968959 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:47.968959 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:47.968959 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:47.969907 master-0 kubenswrapper[7721]: I0216 02:14:47.968968 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:48.969480 master-0 kubenswrapper[7721]: I0216 02:14:48.969330 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:48.969480 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:48.969480 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:48.969480 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:48.969480 master-0 kubenswrapper[7721]: I0216 02:14:48.969464 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:49.969809 master-0 kubenswrapper[7721]: I0216 02:14:49.969717 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:49.969809 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:49.969809 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:49.969809 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:49.970575 master-0 kubenswrapper[7721]: I0216 02:14:49.969836 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:50.972196 master-0 kubenswrapper[7721]: I0216 02:14:50.972069 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:50.972196 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 
02:14:50.972196 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:50.972196 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:50.972196 master-0 kubenswrapper[7721]: I0216 02:14:50.972181 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:51.968206 master-0 kubenswrapper[7721]: I0216 02:14:51.967697 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:51.968206 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:51.968206 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:51.968206 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:51.968722 master-0 kubenswrapper[7721]: I0216 02:14:51.968215 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:52.969312 master-0 kubenswrapper[7721]: I0216 02:14:52.969196 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:52.969312 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:52.969312 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:52.969312 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:52.969312 master-0 kubenswrapper[7721]: I0216 02:14:52.969302 
7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:53.117482 master-0 kubenswrapper[7721]: I0216 02:14:53.117372 7721 scope.go:117] "RemoveContainer" containerID="fcd918e42e09edbde82af27329ac4d0663845d79ca2085b97d9bb5eab9b7e0af" Feb 16 02:14:53.971865 master-0 kubenswrapper[7721]: I0216 02:14:53.971790 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:53.971865 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:53.971865 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:53.971865 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:53.972417 master-0 kubenswrapper[7721]: I0216 02:14:53.971882 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:54.969326 master-0 kubenswrapper[7721]: I0216 02:14:54.969192 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:54.969326 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:54.969326 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:54.969326 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:54.969952 master-0 kubenswrapper[7721]: I0216 02:14:54.969330 7721 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:55.969886 master-0 kubenswrapper[7721]: I0216 02:14:55.969771 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:55.969886 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:55.969886 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:55.969886 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:55.971017 master-0 kubenswrapper[7721]: I0216 02:14:55.969902 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:56.968990 master-0 kubenswrapper[7721]: I0216 02:14:56.968885 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:56.968990 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:56.968990 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:56.968990 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:56.969499 master-0 kubenswrapper[7721]: I0216 02:14:56.969012 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 02:14:57.969558 master-0 kubenswrapper[7721]: I0216 02:14:57.969422 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:57.969558 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:57.969558 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:57.969558 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:57.970760 master-0 kubenswrapper[7721]: I0216 02:14:57.969552 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:58.992882 master-0 kubenswrapper[7721]: I0216 02:14:58.992076 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:58.992882 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:58.992882 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:58.992882 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:58.992882 master-0 kubenswrapper[7721]: I0216 02:14:58.992185 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:14:59.968741 master-0 kubenswrapper[7721]: I0216 02:14:59.968626 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:14:59.968741 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:14:59.968741 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:14:59.968741 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:14:59.968741 master-0 kubenswrapper[7721]: I0216 02:14:59.968733 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:00.215193 master-0 kubenswrapper[7721]: I0216 02:15:00.215077 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59"] Feb 16 02:15:00.216496 master-0 kubenswrapper[7721]: I0216 02:15:00.216395 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.220090 master-0 kubenswrapper[7721]: I0216 02:15:00.219966 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-h8ldk" Feb 16 02:15:00.220270 master-0 kubenswrapper[7721]: I0216 02:15:00.220137 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 02:15:00.237047 master-0 kubenswrapper[7721]: I0216 02:15:00.236654 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59"] Feb 16 02:15:00.363362 master-0 kubenswrapper[7721]: I0216 02:15:00.363274 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8269ffdd-7357-4a8c-b578-0f482558f93e-secret-volume\") pod \"collect-profiles-29520135-gdm59\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.363661 master-0 kubenswrapper[7721]: I0216 02:15:00.363527 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8269ffdd-7357-4a8c-b578-0f482558f93e-config-volume\") pod \"collect-profiles-29520135-gdm59\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.363661 master-0 kubenswrapper[7721]: I0216 02:15:00.363640 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vdjz\" (UniqueName: \"kubernetes.io/projected/8269ffdd-7357-4a8c-b578-0f482558f93e-kube-api-access-7vdjz\") pod \"collect-profiles-29520135-gdm59\" (UID: 
\"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.464789 master-0 kubenswrapper[7721]: I0216 02:15:00.464698 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8269ffdd-7357-4a8c-b578-0f482558f93e-config-volume\") pod \"collect-profiles-29520135-gdm59\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.465095 master-0 kubenswrapper[7721]: I0216 02:15:00.464806 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vdjz\" (UniqueName: \"kubernetes.io/projected/8269ffdd-7357-4a8c-b578-0f482558f93e-kube-api-access-7vdjz\") pod \"collect-profiles-29520135-gdm59\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.465095 master-0 kubenswrapper[7721]: I0216 02:15:00.464909 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8269ffdd-7357-4a8c-b578-0f482558f93e-secret-volume\") pod \"collect-profiles-29520135-gdm59\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.467967 master-0 kubenswrapper[7721]: I0216 02:15:00.467896 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8269ffdd-7357-4a8c-b578-0f482558f93e-config-volume\") pod \"collect-profiles-29520135-gdm59\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.470769 master-0 kubenswrapper[7721]: I0216 02:15:00.470651 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/8269ffdd-7357-4a8c-b578-0f482558f93e-secret-volume\") pod \"collect-profiles-29520135-gdm59\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.495693 master-0 kubenswrapper[7721]: I0216 02:15:00.495600 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vdjz\" (UniqueName: \"kubernetes.io/projected/8269ffdd-7357-4a8c-b578-0f482558f93e-kube-api-access-7vdjz\") pod \"collect-profiles-29520135-gdm59\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.560401 master-0 kubenswrapper[7721]: I0216 02:15:00.560315 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:00.969590 master-0 kubenswrapper[7721]: I0216 02:15:00.969481 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:00.969590 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:00.969590 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:00.969590 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:00.970032 master-0 kubenswrapper[7721]: I0216 02:15:00.969587 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:01.117110 master-0 kubenswrapper[7721]: I0216 02:15:01.116864 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59"] Feb 16 02:15:01.126811 master-0 kubenswrapper[7721]: W0216 02:15:01.126727 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8269ffdd_7357_4a8c_b578_0f482558f93e.slice/crio-37b892c134020f30aa7292b1e83d1425d52b4470fccaddc25006dc9605b060e9 WatchSource:0}: Error finding container 37b892c134020f30aa7292b1e83d1425d52b4470fccaddc25006dc9605b060e9: Status 404 returned error can't find the container with id 37b892c134020f30aa7292b1e83d1425d52b4470fccaddc25006dc9605b060e9 Feb 16 02:15:01.737661 master-0 kubenswrapper[7721]: I0216 02:15:01.737571 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" event={"ID":"8269ffdd-7357-4a8c-b578-0f482558f93e","Type":"ContainerStarted","Data":"3d85392af80e65ab2985e54a4974e1f024f9a9bb02545f3e0dcd5540c8518016"} Feb 16 02:15:01.737661 master-0 kubenswrapper[7721]: I0216 02:15:01.737654 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" event={"ID":"8269ffdd-7357-4a8c-b578-0f482558f93e","Type":"ContainerStarted","Data":"37b892c134020f30aa7292b1e83d1425d52b4470fccaddc25006dc9605b060e9"} Feb 16 02:15:01.766309 master-0 kubenswrapper[7721]: I0216 02:15:01.766223 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" podStartSLOduration=1.766196011 podStartE2EDuration="1.766196011s" podCreationTimestamp="2026-02-16 02:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:15:01.76311882 +0000 UTC m=+505.257353142" watchObservedRunningTime="2026-02-16 02:15:01.766196011 +0000 UTC m=+505.260430303" Feb 16 02:15:01.968961 master-0 
kubenswrapper[7721]: I0216 02:15:01.968873 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:01.968961 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:01.968961 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:01.968961 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:01.969429 master-0 kubenswrapper[7721]: I0216 02:15:01.968967 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:02.970339 master-0 kubenswrapper[7721]: I0216 02:15:02.970211 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:02.970339 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:02.970339 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:02.970339 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:02.970339 master-0 kubenswrapper[7721]: I0216 02:15:02.970296 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:03.970890 master-0 kubenswrapper[7721]: I0216 02:15:03.970682 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:03.970890 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:03.970890 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:03.970890 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:03.970890 master-0 kubenswrapper[7721]: I0216 02:15:03.970826 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:04.764076 master-0 kubenswrapper[7721]: I0216 02:15:04.763975 7721 generic.go:334] "Generic (PLEG): container finished" podID="8269ffdd-7357-4a8c-b578-0f482558f93e" containerID="3d85392af80e65ab2985e54a4974e1f024f9a9bb02545f3e0dcd5540c8518016" exitCode=0 Feb 16 02:15:04.764076 master-0 kubenswrapper[7721]: I0216 02:15:04.764049 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" event={"ID":"8269ffdd-7357-4a8c-b578-0f482558f93e","Type":"ContainerDied","Data":"3d85392af80e65ab2985e54a4974e1f024f9a9bb02545f3e0dcd5540c8518016"} Feb 16 02:15:04.969112 master-0 kubenswrapper[7721]: I0216 02:15:04.969016 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:04.969112 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:04.969112 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:04.969112 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:04.969762 master-0 kubenswrapper[7721]: I0216 02:15:04.969134 7721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:05.971545 master-0 kubenswrapper[7721]: I0216 02:15:05.971453 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:05.971545 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:05.971545 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:05.971545 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:05.971545 master-0 kubenswrapper[7721]: I0216 02:15:05.971528 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:06.184811 master-0 kubenswrapper[7721]: I0216 02:15:06.184736 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:06.264354 master-0 kubenswrapper[7721]: I0216 02:15:06.264276 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vdjz\" (UniqueName: \"kubernetes.io/projected/8269ffdd-7357-4a8c-b578-0f482558f93e-kube-api-access-7vdjz\") pod \"8269ffdd-7357-4a8c-b578-0f482558f93e\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " Feb 16 02:15:06.264875 master-0 kubenswrapper[7721]: I0216 02:15:06.264422 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8269ffdd-7357-4a8c-b578-0f482558f93e-config-volume\") pod \"8269ffdd-7357-4a8c-b578-0f482558f93e\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " Feb 16 02:15:06.265385 master-0 kubenswrapper[7721]: I0216 02:15:06.265322 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8269ffdd-7357-4a8c-b578-0f482558f93e-config-volume" (OuterVolumeSpecName: "config-volume") pod "8269ffdd-7357-4a8c-b578-0f482558f93e" (UID: "8269ffdd-7357-4a8c-b578-0f482558f93e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:15:06.269491 master-0 kubenswrapper[7721]: I0216 02:15:06.269421 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8269ffdd-7357-4a8c-b578-0f482558f93e-kube-api-access-7vdjz" (OuterVolumeSpecName: "kube-api-access-7vdjz") pod "8269ffdd-7357-4a8c-b578-0f482558f93e" (UID: "8269ffdd-7357-4a8c-b578-0f482558f93e"). InnerVolumeSpecName "kube-api-access-7vdjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:15:06.366139 master-0 kubenswrapper[7721]: I0216 02:15:06.366064 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8269ffdd-7357-4a8c-b578-0f482558f93e-secret-volume\") pod \"8269ffdd-7357-4a8c-b578-0f482558f93e\" (UID: \"8269ffdd-7357-4a8c-b578-0f482558f93e\") " Feb 16 02:15:06.366558 master-0 kubenswrapper[7721]: I0216 02:15:06.366525 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vdjz\" (UniqueName: \"kubernetes.io/projected/8269ffdd-7357-4a8c-b578-0f482558f93e-kube-api-access-7vdjz\") on node \"master-0\" DevicePath \"\"" Feb 16 02:15:06.366623 master-0 kubenswrapper[7721]: I0216 02:15:06.366562 7721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8269ffdd-7357-4a8c-b578-0f482558f93e-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 02:15:06.371093 master-0 kubenswrapper[7721]: I0216 02:15:06.371030 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8269ffdd-7357-4a8c-b578-0f482558f93e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8269ffdd-7357-4a8c-b578-0f482558f93e" (UID: "8269ffdd-7357-4a8c-b578-0f482558f93e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:15:06.467657 master-0 kubenswrapper[7721]: I0216 02:15:06.467577 7721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8269ffdd-7357-4a8c-b578-0f482558f93e-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 02:15:06.783624 master-0 kubenswrapper[7721]: I0216 02:15:06.783558 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" event={"ID":"8269ffdd-7357-4a8c-b578-0f482558f93e","Type":"ContainerDied","Data":"37b892c134020f30aa7292b1e83d1425d52b4470fccaddc25006dc9605b060e9"} Feb 16 02:15:06.783624 master-0 kubenswrapper[7721]: I0216 02:15:06.783606 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b892c134020f30aa7292b1e83d1425d52b4470fccaddc25006dc9605b060e9" Feb 16 02:15:06.783909 master-0 kubenswrapper[7721]: I0216 02:15:06.783643 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:15:06.969709 master-0 kubenswrapper[7721]: I0216 02:15:06.969583 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:06.969709 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:06.969709 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:06.969709 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:06.970475 master-0 kubenswrapper[7721]: I0216 02:15:06.970174 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:07.969758 master-0 kubenswrapper[7721]: I0216 02:15:07.969626 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:07.969758 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:07.969758 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:07.969758 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:07.969758 master-0 kubenswrapper[7721]: I0216 02:15:07.969729 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:08.101182 master-0 kubenswrapper[7721]: I0216 02:15:08.101050 7721 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 02:15:08.101597 master-0 kubenswrapper[7721]: E0216 02:15:08.101540 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8269ffdd-7357-4a8c-b578-0f482558f93e" containerName="collect-profiles" Feb 16 02:15:08.101597 master-0 kubenswrapper[7721]: I0216 02:15:08.101588 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8269ffdd-7357-4a8c-b578-0f482558f93e" containerName="collect-profiles" Feb 16 02:15:08.101898 master-0 kubenswrapper[7721]: I0216 02:15:08.101855 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8269ffdd-7357-4a8c-b578-0f482558f93e" containerName="collect-profiles" Feb 16 02:15:08.102732 master-0 kubenswrapper[7721]: I0216 02:15:08.102678 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.106014 master-0 kubenswrapper[7721]: I0216 02:15:08.105894 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-c6ptw" Feb 16 02:15:08.108735 master-0 kubenswrapper[7721]: I0216 02:15:08.107788 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 16 02:15:08.115075 master-0 kubenswrapper[7721]: I0216 02:15:08.114985 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 02:15:08.196498 master-0 kubenswrapper[7721]: I0216 02:15:08.196385 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.196821 master-0 kubenswrapper[7721]: I0216 02:15:08.196564 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.196821 master-0 kubenswrapper[7721]: I0216 02:15:08.196597 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-var-lock\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.299279 master-0 kubenswrapper[7721]: I0216 02:15:08.299007 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.299910 master-0 kubenswrapper[7721]: I0216 02:15:08.299214 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.300041 master-0 kubenswrapper[7721]: I0216 02:15:08.299885 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.300175 master-0 kubenswrapper[7721]: I0216 02:15:08.300154 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-var-lock\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.300348 master-0 kubenswrapper[7721]: I0216 02:15:08.300220 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-var-lock\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.331064 master-0 kubenswrapper[7721]: I0216 02:15:08.330991 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") " pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.492491 master-0 kubenswrapper[7721]: I0216 02:15:08.492041 7721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 02:15:08.969005 master-0 kubenswrapper[7721]: I0216 02:15:08.968889 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:08.969005 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:08.969005 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:08.969005 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:08.969005 master-0 kubenswrapper[7721]: I0216 02:15:08.968988 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:09.024676 master-0 kubenswrapper[7721]: I0216 02:15:09.023610 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 02:15:09.037824 master-0 kubenswrapper[7721]: W0216 02:15:09.037733 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4ff0fbd8_9ecc_421f_952c_c90ea17ddc7b.slice/crio-2ca81682ef82021a12533b5444bd8e4c0c15330d2db438a6a205cb774dd456a1 WatchSource:0}: Error finding container 2ca81682ef82021a12533b5444bd8e4c0c15330d2db438a6a205cb774dd456a1: Status 404 returned error can't find the container with id 2ca81682ef82021a12533b5444bd8e4c0c15330d2db438a6a205cb774dd456a1 Feb 16 02:15:09.811366 master-0 kubenswrapper[7721]: I0216 02:15:09.811276 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b","Type":"ContainerStarted","Data":"0b50b3880ac63d1428248172985d5d09cf2333281a593cd01b076731c8454c9a"} Feb 16 02:15:09.811366 
master-0 kubenswrapper[7721]: I0216 02:15:09.811351 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b","Type":"ContainerStarted","Data":"2ca81682ef82021a12533b5444bd8e4c0c15330d2db438a6a205cb774dd456a1"} Feb 16 02:15:09.835190 master-0 kubenswrapper[7721]: I0216 02:15:09.835087 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=1.8350523010000002 podStartE2EDuration="1.835052301s" podCreationTimestamp="2026-02-16 02:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:15:09.834369903 +0000 UTC m=+513.328604225" watchObservedRunningTime="2026-02-16 02:15:09.835052301 +0000 UTC m=+513.329286603" Feb 16 02:15:09.969587 master-0 kubenswrapper[7721]: I0216 02:15:09.969502 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:09.969587 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:09.969587 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:09.969587 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:09.970102 master-0 kubenswrapper[7721]: I0216 02:15:09.969598 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:10.969306 master-0 kubenswrapper[7721]: I0216 02:15:10.969200 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:10.969306 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:10.969306 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:10.969306 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:10.969306 master-0 kubenswrapper[7721]: I0216 02:15:10.969300 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:11.968609 master-0 kubenswrapper[7721]: I0216 02:15:11.968477 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:11.968609 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:11.968609 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:11.968609 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:11.968609 master-0 kubenswrapper[7721]: I0216 02:15:11.968609 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:12.969043 master-0 kubenswrapper[7721]: I0216 02:15:12.968919 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:12.969043 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:12.969043 
master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:12.969043 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:12.970483 master-0 kubenswrapper[7721]: I0216 02:15:12.969055 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:13.970089 master-0 kubenswrapper[7721]: I0216 02:15:13.969937 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:13.970089 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:13.970089 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:13.970089 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:13.970089 master-0 kubenswrapper[7721]: I0216 02:15:13.970067 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:14.969430 master-0 kubenswrapper[7721]: I0216 02:15:14.969365 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:14.969430 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:14.969430 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:14.969430 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:14.970024 master-0 kubenswrapper[7721]: I0216 02:15:14.969964 7721 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:15.974740 master-0 kubenswrapper[7721]: I0216 02:15:15.974627 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:15.974740 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:15.974740 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:15.974740 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:15.976011 master-0 kubenswrapper[7721]: I0216 02:15:15.974761 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:16.969756 master-0 kubenswrapper[7721]: I0216 02:15:16.969568 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:16.969756 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:16.969756 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:16.969756 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:16.970544 master-0 kubenswrapper[7721]: I0216 02:15:16.969789 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 16 02:15:17.969164 master-0 kubenswrapper[7721]: I0216 02:15:17.968989 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:17.969164 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:17.969164 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:17.969164 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:17.970899 master-0 kubenswrapper[7721]: I0216 02:15:17.970656 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:18.970111 master-0 kubenswrapper[7721]: I0216 02:15:18.969996 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:18.970111 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:18.970111 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:18.970111 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:18.971240 master-0 kubenswrapper[7721]: I0216 02:15:18.970122 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:19.968537 master-0 kubenswrapper[7721]: I0216 02:15:19.968458 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:15:19.968537 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:15:19.968537 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:15:19.968537 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:15:19.968878 master-0 kubenswrapper[7721]: I0216 02:15:19.968558 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:15:19.968878 master-0 kubenswrapper[7721]: I0216 02:15:19.968628 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:15:19.969502 master-0 kubenswrapper[7721]: I0216 02:15:19.969424 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"226c9fa2763b9d22f9a5e31efca1219592ef35d729ba28d906add8e85efb3944"} pod="openshift-ingress/router-default-864ddd5f56-ffptx" containerMessage="Container router failed startup probe, will be restarted" Feb 16 02:15:19.969552 master-0 kubenswrapper[7721]: I0216 02:15:19.969531 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" containerID="cri-o://226c9fa2763b9d22f9a5e31efca1219592ef35d729ba28d906add8e85efb3944" gracePeriod=3600 Feb 16 02:15:36.737578 master-0 kubenswrapper[7721]: I0216 02:15:36.737503 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 16 02:15:36.740783 master-0 kubenswrapper[7721]: I0216 02:15:36.738644 7721 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:36.746097 master-0 kubenswrapper[7721]: I0216 02:15:36.744480 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-wnpkt" Feb 16 02:15:36.746097 master-0 kubenswrapper[7721]: I0216 02:15:36.744798 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 02:15:36.751100 master-0 kubenswrapper[7721]: I0216 02:15:36.751043 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 16 02:15:36.844906 master-0 kubenswrapper[7721]: I0216 02:15:36.844834 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:36.845171 master-0 kubenswrapper[7721]: I0216 02:15:36.844964 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:36.845171 master-0 kubenswrapper[7721]: I0216 02:15:36.845019 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9063971f-d258-4c4b-9e12-06b7de390d3b-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 
02:15:36.946986 master-0 kubenswrapper[7721]: I0216 02:15:36.946934 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:36.947333 master-0 kubenswrapper[7721]: I0216 02:15:36.947310 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:36.947507 master-0 kubenswrapper[7721]: I0216 02:15:36.947405 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:36.947595 master-0 kubenswrapper[7721]: I0216 02:15:36.947541 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:36.947707 master-0 kubenswrapper[7721]: I0216 02:15:36.947688 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9063971f-d258-4c4b-9e12-06b7de390d3b-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:36.978932 master-0 kubenswrapper[7721]: I0216 02:15:36.978873 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9063971f-d258-4c4b-9e12-06b7de390d3b-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:37.073108 master-0 kubenswrapper[7721]: I0216 02:15:37.072953 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:15:37.577224 master-0 kubenswrapper[7721]: I0216 02:15:37.577124 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 16 02:15:37.584230 master-0 kubenswrapper[7721]: W0216 02:15:37.584155 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9063971f_d258_4c4b_9e12_06b7de390d3b.slice/crio-58997b8fc48a379ceddf1aa04ebaf598050ecf9e7d85b482c7e790d8fddb7296 WatchSource:0}: Error finding container 58997b8fc48a379ceddf1aa04ebaf598050ecf9e7d85b482c7e790d8fddb7296: Status 404 returned error can't find the container with id 58997b8fc48a379ceddf1aa04ebaf598050ecf9e7d85b482c7e790d8fddb7296 Feb 16 02:15:38.086119 master-0 kubenswrapper[7721]: I0216 02:15:38.086012 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"9063971f-d258-4c4b-9e12-06b7de390d3b","Type":"ContainerStarted","Data":"f61bf9622140879bd257d70cb26fe6250ec7cfc5858c85cf7bce7b8c5f8c9dbd"} Feb 16 02:15:38.086119 master-0 kubenswrapper[7721]: I0216 02:15:38.086092 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" 
event={"ID":"9063971f-d258-4c4b-9e12-06b7de390d3b","Type":"ContainerStarted","Data":"58997b8fc48a379ceddf1aa04ebaf598050ecf9e7d85b482c7e790d8fddb7296"} Feb 16 02:15:38.109811 master-0 kubenswrapper[7721]: I0216 02:15:38.109697 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.109660322 podStartE2EDuration="2.109660322s" podCreationTimestamp="2026-02-16 02:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:15:38.108331265 +0000 UTC m=+541.602565587" watchObservedRunningTime="2026-02-16 02:15:38.109660322 +0000 UTC m=+541.603894654" Feb 16 02:15:40.757671 master-0 kubenswrapper[7721]: I0216 02:15:40.757565 7721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 16 02:15:40.759111 master-0 kubenswrapper[7721]: I0216 02:15:40.758270 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl" containerID="cri-o://515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1" gracePeriod=30 Feb 16 02:15:40.759111 master-0 kubenswrapper[7721]: I0216 02:15:40.758375 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics" containerID="cri-o://552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c" gracePeriod=30 Feb 16 02:15:40.759111 master-0 kubenswrapper[7721]: I0216 02:15:40.758405 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd" containerID="cri-o://833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101" gracePeriod=30 Feb 16 02:15:40.759111 
master-0 kubenswrapper[7721]: I0216 02:15:40.758570 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz" containerID="cri-o://bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d" gracePeriod=30
Feb 16 02:15:40.759111 master-0 kubenswrapper[7721]: I0216 02:15:40.758419 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev" containerID="cri-o://3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359" gracePeriod=30
Feb 16 02:15:40.761889 master-0 kubenswrapper[7721]: I0216 02:15:40.761828 7721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 16 02:15:40.762524 master-0 kubenswrapper[7721]: E0216 02:15:40.762411 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 16 02:15:40.762524 master-0 kubenswrapper[7721]: I0216 02:15:40.762488 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 16 02:15:40.762524 master-0 kubenswrapper[7721]: E0216 02:15:40.762519 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 16 02:15:40.762857 master-0 kubenswrapper[7721]: I0216 02:15:40.762537 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 16 02:15:40.762857 master-0 kubenswrapper[7721]: E0216 02:15:40.762610 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 16 02:15:40.762857 master-0 kubenswrapper[7721]: I0216 02:15:40.762634 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 16 02:15:40.762857 master-0 kubenswrapper[7721]: E0216 02:15:40.762690 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-ensure-env-vars"
Feb 16 02:15:40.762857 master-0 kubenswrapper[7721]: I0216 02:15:40.762708 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-ensure-env-vars"
Feb 16 02:15:40.762857 master-0 kubenswrapper[7721]: E0216 02:15:40.762733 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="setup"
Feb 16 02:15:40.762857 master-0 kubenswrapper[7721]: I0216 02:15:40.762753 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="setup"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: E0216 02:15:40.762864 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-resources-copy"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: I0216 02:15:40.762885 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-resources-copy"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: E0216 02:15:40.762909 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: I0216 02:15:40.762926 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: E0216 02:15:40.762951 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: I0216 02:15:40.762967 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: I0216 02:15:40.763247 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: I0216 02:15:40.763284 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: I0216 02:15:40.763306 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: I0216 02:15:40.763334 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 16 02:15:40.763516 master-0 kubenswrapper[7721]: I0216 02:15:40.763367 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 16 02:15:40.912336 master-0 kubenswrapper[7721]: I0216 02:15:40.912240 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:40.912579 master-0 kubenswrapper[7721]: I0216 02:15:40.912353 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:40.912579 master-0 kubenswrapper[7721]: I0216 02:15:40.912427 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:40.912579 master-0 kubenswrapper[7721]: I0216 02:15:40.912560 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:40.913101 master-0 kubenswrapper[7721]: I0216 02:15:40.912621 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:40.913101 master-0 kubenswrapper[7721]: I0216 02:15:40.912677 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.013933 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014035 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014082 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014118 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014141 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014180 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014083 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014224 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014235 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014265 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014459 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.014645 master-0 kubenswrapper[7721]: I0216 02:15:41.014554 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:15:41.120934 master-0 kubenswrapper[7721]: I0216 02:15:41.120860 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log"
Feb 16 02:15:41.122934 master-0 kubenswrapper[7721]: I0216 02:15:41.122885 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log"
Feb 16 02:15:41.126370 master-0 kubenswrapper[7721]: I0216 02:15:41.126321 7721 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359" exitCode=2
Feb 16 02:15:41.126370 master-0 kubenswrapper[7721]: I0216 02:15:41.126368 7721 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d" exitCode=0
Feb 16 02:15:41.126585 master-0 kubenswrapper[7721]: I0216 02:15:41.126388 7721 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c" exitCode=2
Feb 16 02:15:41.478230 master-0 kubenswrapper[7721]: I0216 02:15:41.478145 7721 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body=
Feb 16 02:15:41.478570 master-0 kubenswrapper[7721]: I0216 02:15:41.478234 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused"
Feb 16 02:15:50.978575 master-0 kubenswrapper[7721]: E0216 02:15:50.978431 7721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:15:53.182681 master-0 kubenswrapper[7721]: I0216 02:15:53.182603 7721 scope.go:117] "RemoveContainer" containerID="faf5128620c105dbf4c0b83460e5c6d63ea7e16d1417f90a62c09817a9c5e166"
Feb 16 02:15:54.272251 master-0 kubenswrapper[7721]: I0216 02:15:54.272096 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/0.log"
Feb 16 02:15:54.272251 master-0 kubenswrapper[7721]: I0216 02:15:54.272225 7721 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="2764ae0cc6da6493da6557571cb01f0bf8aba4f15b5e56b0e8f80cf54cb86272" exitCode=1
Feb 16 02:15:54.273308 master-0 kubenswrapper[7721]: I0216 02:15:54.272290 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"2764ae0cc6da6493da6557571cb01f0bf8aba4f15b5e56b0e8f80cf54cb86272"}
Feb 16 02:15:54.273426 master-0 kubenswrapper[7721]: I0216 02:15:54.273384 7721 scope.go:117] "RemoveContainer" containerID="2764ae0cc6da6493da6557571cb01f0bf8aba4f15b5e56b0e8f80cf54cb86272"
Feb 16 02:15:55.287832 master-0 kubenswrapper[7721]: I0216 02:15:55.287640 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/0.log"
Feb 16 02:15:55.288553 master-0 kubenswrapper[7721]: I0216 02:15:55.287835 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"c315dd4013bf47ddda1fd2c99a095489c35ec0eda907e0f77d5a4d2d27ec8d89"}
Feb 16 02:15:55.290697 master-0 kubenswrapper[7721]: I0216 02:15:55.290629 7721 generic.go:334] "Generic (PLEG): container finished" podID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" containerID="0b50b3880ac63d1428248172985d5d09cf2333281a593cd01b076731c8454c9a" exitCode=0
Feb 16 02:15:55.290785 master-0 kubenswrapper[7721]: I0216 02:15:55.290714 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b","Type":"ContainerDied","Data":"0b50b3880ac63d1428248172985d5d09cf2333281a593cd01b076731c8454c9a"}
Feb 16 02:15:56.763471 master-0 kubenswrapper[7721]: I0216 02:15:56.763382 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 16 02:15:56.890515 master-0 kubenswrapper[7721]: I0216 02:15:56.890457 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kube-api-access\") pod \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") "
Feb 16 02:15:56.890777 master-0 kubenswrapper[7721]: I0216 02:15:56.890604 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kubelet-dir\") pod \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") "
Feb 16 02:15:56.890777 master-0 kubenswrapper[7721]: I0216 02:15:56.890743 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-var-lock\") pod \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\" (UID: \"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b\") "
Feb 16 02:15:56.890979 master-0 kubenswrapper[7721]: I0216 02:15:56.890829 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" (UID: "4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:15:56.891088 master-0 kubenswrapper[7721]: I0216 02:15:56.890997 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-var-lock" (OuterVolumeSpecName: "var-lock") pod "4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" (UID: "4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:15:56.891666 master-0 kubenswrapper[7721]: I0216 02:15:56.891618 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:15:56.891666 master-0 kubenswrapper[7721]: I0216 02:15:56.891656 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 02:15:56.895463 master-0 kubenswrapper[7721]: I0216 02:15:56.895370 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" (UID: "4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:15:56.992783 master-0 kubenswrapper[7721]: I0216 02:15:56.992649 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 02:15:57.311866 master-0 kubenswrapper[7721]: I0216 02:15:57.311760 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b","Type":"ContainerDied","Data":"2ca81682ef82021a12533b5444bd8e4c0c15330d2db438a6a205cb774dd456a1"}
Feb 16 02:15:57.311866 master-0 kubenswrapper[7721]: I0216 02:15:57.311806 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 16 02:15:57.311866 master-0 kubenswrapper[7721]: I0216 02:15:57.311829 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca81682ef82021a12533b5444bd8e4c0c15330d2db438a6a205cb774dd456a1"
Feb 16 02:15:57.314651 master-0 kubenswrapper[7721]: I0216 02:15:57.314587 7721 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="1bad524fd514e3639a6a8b060873c8398b9f534aa2528726df9aa2897827465b" exitCode=1
Feb 16 02:15:57.314651 master-0 kubenswrapper[7721]: I0216 02:15:57.314636 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerDied","Data":"1bad524fd514e3639a6a8b060873c8398b9f534aa2528726df9aa2897827465b"}
Feb 16 02:15:57.314883 master-0 kubenswrapper[7721]: I0216 02:15:57.314677 7721 scope.go:117] "RemoveContainer" containerID="0d84c00dcc11900a2f5a4ff15f798ef8c8b6cc92a9b7e1f32a7c33bfeed4a478"
Feb 16 02:15:57.315356 master-0 kubenswrapper[7721]: I0216 02:15:57.315288 7721 scope.go:117] "RemoveContainer" containerID="1bad524fd514e3639a6a8b060873c8398b9f534aa2528726df9aa2897827465b"
Feb 16 02:15:57.315806 master-0 kubenswrapper[7721]: E0216 02:15:57.315746 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(9460ca0802075a8a6a10d7b3e6052c4d)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="9460ca0802075a8a6a10d7b3e6052c4d"
Feb 16 02:16:00.797345 master-0 kubenswrapper[7721]: I0216 02:16:00.797248 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:16:00.797345 master-0 kubenswrapper[7721]: I0216 02:16:00.797341 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:16:00.798710 master-0 kubenswrapper[7721]: I0216 02:16:00.797915 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 16 02:16:00.798710 master-0 kubenswrapper[7721]: I0216 02:16:00.798007 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 16 02:16:00.980067 master-0 kubenswrapper[7721]: E0216 02:16:00.979952 7721 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)"
Feb 16 02:16:06.414936 master-0 kubenswrapper[7721]: I0216 02:16:06.414831 7721 generic.go:334] "Generic (PLEG): container finished" podID="17390d9a-148d-4927-a831-5bc4873c43d5" containerID="226c9fa2763b9d22f9a5e31efca1219592ef35d729ba28d906add8e85efb3944" exitCode=0
Feb 16 02:16:06.414936 master-0 kubenswrapper[7721]: I0216 02:16:06.414913 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerDied","Data":"226c9fa2763b9d22f9a5e31efca1219592ef35d729ba28d906add8e85efb3944"}
Feb 16 02:16:06.416091 master-0 kubenswrapper[7721]: I0216 02:16:06.415001 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerStarted","Data":"3ce3bc4dbf9d2b84eb229ace41c4b8a84419c825acb6db3b3ccaf2d00311773f"}
Feb 16 02:16:06.416091 master-0 kubenswrapper[7721]: I0216 02:16:06.415054 7721 scope.go:117] "RemoveContainer" containerID="627ab5d9a2bbd36bf2da2e153f10cfc3717737712db4adb69838076f9b75b2a5"
Feb 16 02:16:06.965588 master-0 kubenswrapper[7721]: I0216 02:16:06.965499 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:16:06.969489 master-0 kubenswrapper[7721]: I0216 02:16:06.969360 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:06.969489 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:06.969489 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:06.969489 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:06.969824 master-0 kubenswrapper[7721]: I0216 02:16:06.969510 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:07.969395 master-0 kubenswrapper[7721]: I0216 02:16:07.969311 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:07.969395 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:07.969395 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:07.969395 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:07.970131 master-0 kubenswrapper[7721]: I0216 02:16:07.969426 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:08.969741 master-0 kubenswrapper[7721]: I0216 02:16:08.969607 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:08.969741 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:08.969741 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:08.969741 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:08.969741 master-0 kubenswrapper[7721]: I0216 02:16:08.969727 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:09.969020 master-0 kubenswrapper[7721]: I0216 02:16:09.968896 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:09.969020 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:09.969020 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:09.969020 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:09.969020 master-0 kubenswrapper[7721]: I0216 02:16:09.969008 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:10.797563 master-0 kubenswrapper[7721]: I0216 02:16:10.797516 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 16 02:16:10.797911 master-0 kubenswrapper[7721]: I0216 02:16:10.797593 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 16 02:16:10.966138 master-0 kubenswrapper[7721]: I0216 02:16:10.965947 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:16:10.969487 master-0 kubenswrapper[7721]: I0216 02:16:10.969365 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:10.969487 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:10.969487 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:10.969487 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:10.969829 master-0 kubenswrapper[7721]: I0216 02:16:10.969495 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:10.982162 master-0 kubenswrapper[7721]: E0216 02:16:10.982023 7721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:16:11.372618 master-0 kubenswrapper[7721]: I0216 02:16:11.370418 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log"
Feb 16 02:16:11.372618 master-0 kubenswrapper[7721]: I0216 02:16:11.372015 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log"
Feb 16 02:16:11.374794 master-0 kubenswrapper[7721]: I0216 02:16:11.374727 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log"
Feb 16 02:16:11.375515 master-0 kubenswrapper[7721]: I0216 02:16:11.375466 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log"
Feb 16 02:16:11.377700 master-0 kubenswrapper[7721]: I0216 02:16:11.377648 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 16 02:16:11.451573 master-0 kubenswrapper[7721]: I0216 02:16:11.451475 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:16:11.451573 master-0 kubenswrapper[7721]: I0216 02:16:11.451343 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 16 02:16:11.452152 master-0 kubenswrapper[7721]: I0216 02:16:11.451627 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 16 02:16:11.452152 master-0 kubenswrapper[7721]: I0216 02:16:11.451714 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir" (OuterVolumeSpecName: "log-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:16:11.452152 master-0 kubenswrapper[7721]: I0216 02:16:11.451784 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 16 02:16:11.452152 master-0 kubenswrapper[7721]: I0216 02:16:11.451862 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:16:11.452152 master-0 kubenswrapper[7721]: I0216 02:16:11.451969 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 16 02:16:11.452152 master-0 kubenswrapper[7721]: I0216 02:16:11.452032 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 16 02:16:11.452152 master-0 kubenswrapper[7721]: I0216 02:16:11.452061 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") "
Feb 16 02:16:11.452644 master-0 kubenswrapper[7721]: I0216 02:16:11.452163 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:16:11.452644 master-0 kubenswrapper[7721]: I0216 02:16:11.452154 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir" (OuterVolumeSpecName: "data-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:16:11.452644 master-0 kubenswrapper[7721]: I0216 02:16:11.452260 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:16:11.452644 master-0 kubenswrapper[7721]: I0216 02:16:11.452539 7721 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:11.452644 master-0 kubenswrapper[7721]: I0216 02:16:11.452566 7721 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:11.452644 master-0 kubenswrapper[7721]: I0216 02:16:11.452590 7721 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:11.452644 master-0 kubenswrapper[7721]: I0216 02:16:11.452649 7721 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:11.453106 master-0 kubenswrapper[7721]: I0216 02:16:11.452667 7721 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:11.453106 master-0 kubenswrapper[7721]: I0216 02:16:11.452684 7721 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:11.464872 master-0 kubenswrapper[7721]: I0216 02:16:11.464810 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log"
Feb 16 02:16:11.466727 master-0 kubenswrapper[7721]: I0216 02:16:11.466664 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log"
Feb 16 02:16:11.468100 master-0 kubenswrapper[7721]: I0216 02:16:11.468037 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log"
Feb 16 02:16:11.469172 master-0 kubenswrapper[7721]: I0216 02:16:11.469108 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log"
Feb 16 02:16:11.470679 master-0 kubenswrapper[7721]: I0216 02:16:11.470614 7721 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101" exitCode=137
Feb 16 02:16:11.470679 master-0 kubenswrapper[7721]: I0216 02:16:11.470669 7721 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1" exitCode=137
Feb 16 02:16:11.470881 master-0 kubenswrapper[7721]: I0216 02:16:11.470741 7721 scope.go:117] "RemoveContainer" containerID="3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359"
Feb 16 02:16:11.471072 master-0 kubenswrapper[7721]: I0216 02:16:11.471014 7721 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 02:16:11.499597 master-0 kubenswrapper[7721]: I0216 02:16:11.499526 7721 scope.go:117] "RemoveContainer" containerID="bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d" Feb 16 02:16:11.527019 master-0 kubenswrapper[7721]: I0216 02:16:11.526949 7721 scope.go:117] "RemoveContainer" containerID="552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c" Feb 16 02:16:11.554824 master-0 kubenswrapper[7721]: I0216 02:16:11.554750 7721 scope.go:117] "RemoveContainer" containerID="833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101" Feb 16 02:16:11.585002 master-0 kubenswrapper[7721]: I0216 02:16:11.584940 7721 scope.go:117] "RemoveContainer" containerID="515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1" Feb 16 02:16:11.609216 master-0 kubenswrapper[7721]: I0216 02:16:11.609170 7721 scope.go:117] "RemoveContainer" containerID="62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04" Feb 16 02:16:11.638521 master-0 kubenswrapper[7721]: I0216 02:16:11.638369 7721 scope.go:117] "RemoveContainer" containerID="38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2" Feb 16 02:16:11.674802 master-0 kubenswrapper[7721]: I0216 02:16:11.674740 7721 scope.go:117] "RemoveContainer" containerID="09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848" Feb 16 02:16:11.709328 master-0 kubenswrapper[7721]: I0216 02:16:11.709267 7721 scope.go:117] "RemoveContainer" containerID="3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359" Feb 16 02:16:11.709998 master-0 kubenswrapper[7721]: E0216 02:16:11.709932 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359\": container with ID starting with 3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359 not found: ID does not exist" 
containerID="3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359" Feb 16 02:16:11.710161 master-0 kubenswrapper[7721]: I0216 02:16:11.710001 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359"} err="failed to get container status \"3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359\": rpc error: code = NotFound desc = could not find container \"3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359\": container with ID starting with 3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359 not found: ID does not exist" Feb 16 02:16:11.710161 master-0 kubenswrapper[7721]: I0216 02:16:11.710048 7721 scope.go:117] "RemoveContainer" containerID="bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d" Feb 16 02:16:11.710677 master-0 kubenswrapper[7721]: E0216 02:16:11.710606 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d\": container with ID starting with bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d not found: ID does not exist" containerID="bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d" Feb 16 02:16:11.710773 master-0 kubenswrapper[7721]: I0216 02:16:11.710686 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d"} err="failed to get container status \"bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d\": rpc error: code = NotFound desc = could not find container \"bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d\": container with ID starting with bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d not found: ID does not exist" Feb 16 02:16:11.710773 master-0 
kubenswrapper[7721]: I0216 02:16:11.710730 7721 scope.go:117] "RemoveContainer" containerID="552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c" Feb 16 02:16:11.711590 master-0 kubenswrapper[7721]: E0216 02:16:11.711521 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c\": container with ID starting with 552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c not found: ID does not exist" containerID="552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c" Feb 16 02:16:11.711712 master-0 kubenswrapper[7721]: I0216 02:16:11.711586 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c"} err="failed to get container status \"552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c\": rpc error: code = NotFound desc = could not find container \"552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c\": container with ID starting with 552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c not found: ID does not exist" Feb 16 02:16:11.711712 master-0 kubenswrapper[7721]: I0216 02:16:11.711668 7721 scope.go:117] "RemoveContainer" containerID="833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101" Feb 16 02:16:11.712194 master-0 kubenswrapper[7721]: E0216 02:16:11.712137 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101\": container with ID starting with 833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101 not found: ID does not exist" containerID="833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101" Feb 16 02:16:11.712277 master-0 kubenswrapper[7721]: I0216 02:16:11.712189 7721 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101"} err="failed to get container status \"833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101\": rpc error: code = NotFound desc = could not find container \"833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101\": container with ID starting with 833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101 not found: ID does not exist" Feb 16 02:16:11.712277 master-0 kubenswrapper[7721]: I0216 02:16:11.712220 7721 scope.go:117] "RemoveContainer" containerID="515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1" Feb 16 02:16:11.712884 master-0 kubenswrapper[7721]: E0216 02:16:11.712808 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1\": container with ID starting with 515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1 not found: ID does not exist" containerID="515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1" Feb 16 02:16:11.713002 master-0 kubenswrapper[7721]: I0216 02:16:11.712881 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1"} err="failed to get container status \"515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1\": rpc error: code = NotFound desc = could not find container \"515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1\": container with ID starting with 515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1 not found: ID does not exist" Feb 16 02:16:11.713002 master-0 kubenswrapper[7721]: I0216 02:16:11.712921 7721 scope.go:117] "RemoveContainer" containerID="62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04" Feb 16 
02:16:11.713614 master-0 kubenswrapper[7721]: E0216 02:16:11.713563 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04\": container with ID starting with 62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04 not found: ID does not exist" containerID="62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04" Feb 16 02:16:11.713714 master-0 kubenswrapper[7721]: I0216 02:16:11.713610 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04"} err="failed to get container status \"62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04\": rpc error: code = NotFound desc = could not find container \"62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04\": container with ID starting with 62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04 not found: ID does not exist" Feb 16 02:16:11.713714 master-0 kubenswrapper[7721]: I0216 02:16:11.713641 7721 scope.go:117] "RemoveContainer" containerID="38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2" Feb 16 02:16:11.714294 master-0 kubenswrapper[7721]: E0216 02:16:11.714225 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2\": container with ID starting with 38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2 not found: ID does not exist" containerID="38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2" Feb 16 02:16:11.714391 master-0 kubenswrapper[7721]: I0216 02:16:11.714289 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2"} err="failed 
to get container status \"38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2\": rpc error: code = NotFound desc = could not find container \"38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2\": container with ID starting with 38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2 not found: ID does not exist" Feb 16 02:16:11.714391 master-0 kubenswrapper[7721]: I0216 02:16:11.714335 7721 scope.go:117] "RemoveContainer" containerID="09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848" Feb 16 02:16:11.715018 master-0 kubenswrapper[7721]: E0216 02:16:11.714948 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848\": container with ID starting with 09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848 not found: ID does not exist" containerID="09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848" Feb 16 02:16:11.715134 master-0 kubenswrapper[7721]: I0216 02:16:11.715012 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848"} err="failed to get container status \"09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848\": rpc error: code = NotFound desc = could not find container \"09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848\": container with ID starting with 09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848 not found: ID does not exist" Feb 16 02:16:11.715134 master-0 kubenswrapper[7721]: I0216 02:16:11.715057 7721 scope.go:117] "RemoveContainer" containerID="3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359" Feb 16 02:16:11.715693 master-0 kubenswrapper[7721]: I0216 02:16:11.715623 7721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359"} err="failed to get container status \"3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359\": rpc error: code = NotFound desc = could not find container \"3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359\": container with ID starting with 3923bf88ac48e47171dba4bde6b1f5e832036c71d78be372ba2409d3f0539359 not found: ID does not exist" Feb 16 02:16:11.715693 master-0 kubenswrapper[7721]: I0216 02:16:11.715684 7721 scope.go:117] "RemoveContainer" containerID="bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d" Feb 16 02:16:11.716268 master-0 kubenswrapper[7721]: I0216 02:16:11.716197 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d"} err="failed to get container status \"bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d\": rpc error: code = NotFound desc = could not find container \"bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d\": container with ID starting with bd0d944155c33386f42c58adaa5c5fc430daa3c34d69e35637d722fa28cc7f3d not found: ID does not exist" Feb 16 02:16:11.716268 master-0 kubenswrapper[7721]: I0216 02:16:11.716248 7721 scope.go:117] "RemoveContainer" containerID="552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c" Feb 16 02:16:11.716943 master-0 kubenswrapper[7721]: I0216 02:16:11.716885 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c"} err="failed to get container status \"552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c\": rpc error: code = NotFound desc = could not find container \"552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c\": container with ID starting with 
552743736ff647b8843ecd9320b831ffe94e088a851c3824a47b4abd72c5bf6c not found: ID does not exist" Feb 16 02:16:11.716943 master-0 kubenswrapper[7721]: I0216 02:16:11.716926 7721 scope.go:117] "RemoveContainer" containerID="833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101" Feb 16 02:16:11.717885 master-0 kubenswrapper[7721]: I0216 02:16:11.717758 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101"} err="failed to get container status \"833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101\": rpc error: code = NotFound desc = could not find container \"833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101\": container with ID starting with 833612f242200ef19bcc5a8b7695eb61621624c88cc11bacb1ca3e93309cd101 not found: ID does not exist" Feb 16 02:16:11.718003 master-0 kubenswrapper[7721]: I0216 02:16:11.717915 7721 scope.go:117] "RemoveContainer" containerID="515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1" Feb 16 02:16:11.718864 master-0 kubenswrapper[7721]: I0216 02:16:11.718752 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1"} err="failed to get container status \"515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1\": rpc error: code = NotFound desc = could not find container \"515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1\": container with ID starting with 515103720c79c12544e44c114caf39f1fead71aaf1f7b32099dd6e9f8d85dad1 not found: ID does not exist" Feb 16 02:16:11.718864 master-0 kubenswrapper[7721]: I0216 02:16:11.718847 7721 scope.go:117] "RemoveContainer" containerID="62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04" Feb 16 02:16:11.719422 master-0 kubenswrapper[7721]: I0216 02:16:11.719356 7721 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04"} err="failed to get container status \"62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04\": rpc error: code = NotFound desc = could not find container \"62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04\": container with ID starting with 62af0446d65ad4423070101103807a98e30f740477e4dc3f78e2f74fd5837d04 not found: ID does not exist" Feb 16 02:16:11.719422 master-0 kubenswrapper[7721]: I0216 02:16:11.719412 7721 scope.go:117] "RemoveContainer" containerID="38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2" Feb 16 02:16:11.720036 master-0 kubenswrapper[7721]: I0216 02:16:11.719970 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2"} err="failed to get container status \"38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2\": rpc error: code = NotFound desc = could not find container \"38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2\": container with ID starting with 38253c4a837f04a0a9230ea518637f47275c1199732b226cf26c062c64a84db2 not found: ID does not exist" Feb 16 02:16:11.720036 master-0 kubenswrapper[7721]: I0216 02:16:11.720013 7721 scope.go:117] "RemoveContainer" containerID="09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848" Feb 16 02:16:11.720616 master-0 kubenswrapper[7721]: I0216 02:16:11.720545 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848"} err="failed to get container status \"09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848\": rpc error: code = NotFound desc = could not find container \"09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848\": container with ID starting with 
09435c57d62c6b4c54925d09966ded5900478367440fe80cf72c6bfff877a848 not found: ID does not exist" Feb 16 02:16:11.726006 master-0 kubenswrapper[7721]: I0216 02:16:11.725957 7721 scope.go:117] "RemoveContainer" containerID="1bad524fd514e3639a6a8b060873c8398b9f534aa2528726df9aa2897827465b" Feb 16 02:16:11.969705 master-0 kubenswrapper[7721]: I0216 02:16:11.969622 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:11.969705 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:11.969705 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:11.969705 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:11.984019 master-0 kubenswrapper[7721]: I0216 02:16:11.969735 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:12.488571 master-0 kubenswrapper[7721]: I0216 02:16:12.488399 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"6bc4b5ee1e89ed7a76ec9068e6cdb19289d70c03bd852b3dc8e93c9d7f9e1ba4"} Feb 16 02:16:12.747293 master-0 kubenswrapper[7721]: I0216 02:16:12.747115 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401699cb53e7098157e808a83125b0e4" path="/var/lib/kubelet/pods/401699cb53e7098157e808a83125b0e4/volumes" Feb 16 02:16:12.969635 master-0 kubenswrapper[7721]: I0216 02:16:12.969541 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:12.969635 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:12.969635 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:12.969635 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:12.970634 master-0 kubenswrapper[7721]: I0216 02:16:12.969652 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:13.968855 master-0 kubenswrapper[7721]: I0216 02:16:13.968712 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:13.968855 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:13.968855 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:13.968855 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:13.968855 master-0 kubenswrapper[7721]: I0216 02:16:13.968812 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:14.768950 master-0 kubenswrapper[7721]: E0216 02:16:14.768721 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.1894986b752dd365 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:401699cb53e7098157e808a83125b0e4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:15:40.758307685 +0000 UTC m=+544.252541987,LastTimestamp:2026-02-16 02:15:40.758307685 +0000 UTC m=+544.252541987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:16:14.968711 master-0 kubenswrapper[7721]: I0216 02:16:14.968626 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:14.968711 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:14.968711 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:14.968711 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:14.969158 master-0 kubenswrapper[7721]: I0216 02:16:14.968730 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:15.969819 master-0 kubenswrapper[7721]: I0216 02:16:15.969708 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:15.969819 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:15.969819 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:15.969819 
master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:15.969819 master-0 kubenswrapper[7721]: I0216 02:16:15.969814 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:16.968007 master-0 kubenswrapper[7721]: I0216 02:16:16.967901 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:16.968007 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:16.968007 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:16.968007 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:16.968711 master-0 kubenswrapper[7721]: I0216 02:16:16.968028 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:17.969573 master-0 kubenswrapper[7721]: I0216 02:16:17.969431 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:17.969573 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:17.969573 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:17.969573 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:17.969573 master-0 kubenswrapper[7721]: I0216 02:16:17.969569 7721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:18.969331 master-0 kubenswrapper[7721]: I0216 02:16:18.969243 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:18.969331 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:18.969331 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:18.969331 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:18.970268 master-0 kubenswrapper[7721]: I0216 02:16:18.969329 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:19.724134 master-0 kubenswrapper[7721]: I0216 02:16:19.724027 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 16 02:16:19.744357 master-0 kubenswrapper[7721]: I0216 02:16:19.744290 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06"
Feb 16 02:16:19.744357 master-0 kubenswrapper[7721]: I0216 02:16:19.744335 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06"
Feb 16 02:16:19.968840 master-0 kubenswrapper[7721]: I0216 02:16:19.968686 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:19.968840 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:19.968840 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:19.968840 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:19.969361 master-0 kubenswrapper[7721]: I0216 02:16:19.968879 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:20.798540 master-0 kubenswrapper[7721]: I0216 02:16:20.798396 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 16 02:16:20.799330 master-0 kubenswrapper[7721]: I0216 02:16:20.798547 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 16 02:16:20.799330 master-0 kubenswrapper[7721]: I0216 02:16:20.798684 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:16:20.800426 master-0 kubenswrapper[7721]: I0216 02:16:20.800354 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"c315dd4013bf47ddda1fd2c99a095489c35ec0eda907e0f77d5a4d2d27ec8d89"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 16 02:16:20.800675 master-0 kubenswrapper[7721]: I0216 02:16:20.800623 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" containerID="cri-o://c315dd4013bf47ddda1fd2c99a095489c35ec0eda907e0f77d5a4d2d27ec8d89" gracePeriod=30
Feb 16 02:16:20.969964 master-0 kubenswrapper[7721]: I0216 02:16:20.969850 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:20.969964 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:20.969964 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:20.969964 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:20.970378 master-0 kubenswrapper[7721]: I0216 02:16:20.969978 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:20.983080 master-0 kubenswrapper[7721]: E0216 02:16:20.982991 7721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:16:21.969168 master-0 kubenswrapper[7721]: I0216 02:16:21.969030 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:21.969168 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:21.969168 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:21.969168 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:21.970096 master-0 kubenswrapper[7721]: I0216 02:16:21.969206 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:22.970210 master-0 kubenswrapper[7721]: I0216 02:16:22.970116 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:22.970210 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:22.970210 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:22.970210 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:22.971595 master-0 kubenswrapper[7721]: I0216 02:16:22.970210 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:23.581258 master-0 kubenswrapper[7721]: I0216 02:16:23.581190 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_9063971f-d258-4c4b-9e12-06b7de390d3b/installer/0.log"
Feb 16 02:16:23.581548 master-0 kubenswrapper[7721]: I0216 02:16:23.581288 7721 generic.go:334] "Generic (PLEG): container finished" podID="9063971f-d258-4c4b-9e12-06b7de390d3b" containerID="f61bf9622140879bd257d70cb26fe6250ec7cfc5858c85cf7bce7b8c5f8c9dbd" exitCode=1
Feb 16 02:16:23.581548 master-0 kubenswrapper[7721]: I0216 02:16:23.581343 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"9063971f-d258-4c4b-9e12-06b7de390d3b","Type":"ContainerDied","Data":"f61bf9622140879bd257d70cb26fe6250ec7cfc5858c85cf7bce7b8c5f8c9dbd"}
Feb 16 02:16:23.970506 master-0 kubenswrapper[7721]: I0216 02:16:23.970364 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:23.970506 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:23.970506 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:23.970506 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:23.971481 master-0 kubenswrapper[7721]: I0216 02:16:23.970514 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:24.253107 master-0 kubenswrapper[7721]: E0216 02:16:24.252900 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:16:14Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:16:14Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:16:14Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:16:14Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:16:24.970049 master-0 kubenswrapper[7721]: I0216 02:16:24.969920 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:24.970049 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:24.970049 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:24.970049 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:24.970371 master-0 kubenswrapper[7721]: I0216 02:16:24.970114 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:25.008951 master-0 kubenswrapper[7721]: I0216 02:16:25.008836 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_9063971f-d258-4c4b-9e12-06b7de390d3b/installer/0.log"
Feb 16 02:16:25.008951 master-0 kubenswrapper[7721]: I0216 02:16:25.008977 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 02:16:25.173240 master-0 kubenswrapper[7721]: I0216 02:16:25.173128 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-var-lock\") pod \"9063971f-d258-4c4b-9e12-06b7de390d3b\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") "
Feb 16 02:16:25.173240 master-0 kubenswrapper[7721]: I0216 02:16:25.173198 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-kubelet-dir\") pod \"9063971f-d258-4c4b-9e12-06b7de390d3b\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") "
Feb 16 02:16:25.173761 master-0 kubenswrapper[7721]: I0216 02:16:25.173267 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-var-lock" (OuterVolumeSpecName: "var-lock") pod "9063971f-d258-4c4b-9e12-06b7de390d3b" (UID: "9063971f-d258-4c4b-9e12-06b7de390d3b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:16:25.173761 master-0 kubenswrapper[7721]: I0216 02:16:25.173382 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9063971f-d258-4c4b-9e12-06b7de390d3b-kube-api-access\") pod \"9063971f-d258-4c4b-9e12-06b7de390d3b\" (UID: \"9063971f-d258-4c4b-9e12-06b7de390d3b\") "
Feb 16 02:16:25.173761 master-0 kubenswrapper[7721]: I0216 02:16:25.173414 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9063971f-d258-4c4b-9e12-06b7de390d3b" (UID: "9063971f-d258-4c4b-9e12-06b7de390d3b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:16:25.174095 master-0 kubenswrapper[7721]: I0216 02:16:25.173825 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:25.174095 master-0 kubenswrapper[7721]: I0216 02:16:25.173851 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9063971f-d258-4c4b-9e12-06b7de390d3b-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:25.178405 master-0 kubenswrapper[7721]: I0216 02:16:25.178165 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9063971f-d258-4c4b-9e12-06b7de390d3b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9063971f-d258-4c4b-9e12-06b7de390d3b" (UID: "9063971f-d258-4c4b-9e12-06b7de390d3b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:16:25.275405 master-0 kubenswrapper[7721]: I0216 02:16:25.275327 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9063971f-d258-4c4b-9e12-06b7de390d3b-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 02:16:25.601567 master-0 kubenswrapper[7721]: I0216 02:16:25.601372 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_9063971f-d258-4c4b-9e12-06b7de390d3b/installer/0.log"
Feb 16 02:16:25.601567 master-0 kubenswrapper[7721]: I0216 02:16:25.601509 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"9063971f-d258-4c4b-9e12-06b7de390d3b","Type":"ContainerDied","Data":"58997b8fc48a379ceddf1aa04ebaf598050ecf9e7d85b482c7e790d8fddb7296"}
Feb 16 02:16:25.601567 master-0 kubenswrapper[7721]: I0216 02:16:25.601550 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58997b8fc48a379ceddf1aa04ebaf598050ecf9e7d85b482c7e790d8fddb7296"
Feb 16 02:16:25.602014 master-0 kubenswrapper[7721]: I0216 02:16:25.601622 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 02:16:25.969419 master-0 kubenswrapper[7721]: I0216 02:16:25.969315 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:25.969419 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:25.969419 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:25.969419 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:25.969948 master-0 kubenswrapper[7721]: I0216 02:16:25.969482 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:26.989803 master-0 kubenswrapper[7721]: I0216 02:16:26.989620 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:26.989803 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:26.989803 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:26.989803 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:26.989803 master-0 kubenswrapper[7721]: I0216 02:16:26.989782 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:27.968946 master-0 kubenswrapper[7721]: I0216 02:16:27.968818 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:27.968946 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:27.968946 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:27.968946 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:27.968946 master-0 kubenswrapper[7721]: I0216 02:16:27.968935 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:28.969694 master-0 kubenswrapper[7721]: I0216 02:16:28.969589 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:28.969694 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:28.969694 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:28.969694 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:28.970938 master-0 kubenswrapper[7721]: I0216 02:16:28.969708 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:29.968976 master-0 kubenswrapper[7721]: I0216 02:16:29.968902 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:29.968976 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:29.968976 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:29.968976 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:29.969482 master-0 kubenswrapper[7721]: I0216 02:16:29.969003 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:30.970672 master-0 kubenswrapper[7721]: I0216 02:16:30.970420 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:30.970672 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:30.970672 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:30.970672 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:30.970672 master-0 kubenswrapper[7721]: I0216 02:16:30.970615 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:30.984148 master-0 kubenswrapper[7721]: E0216 02:16:30.984045 7721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:16:30.984148 master-0 kubenswrapper[7721]: I0216 02:16:30.984136 7721 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 16 02:16:31.969332 master-0 kubenswrapper[7721]: I0216 02:16:31.969201 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:31.969332 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:31.969332 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:31.969332 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:31.969858 master-0 kubenswrapper[7721]: I0216 02:16:31.969350 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:32.969186 master-0 kubenswrapper[7721]: I0216 02:16:32.969079 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:32.969186 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:32.969186 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:32.969186 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:32.969186 master-0 kubenswrapper[7721]: I0216 02:16:32.969175 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:33.969509 master-0 kubenswrapper[7721]: I0216 02:16:33.969418 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:33.969509 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:33.969509 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:33.969509 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:33.972340 master-0 kubenswrapper[7721]: I0216 02:16:33.972293 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:34.253771 master-0 kubenswrapper[7721]: E0216 02:16:34.253604 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:16:34.968147 master-0 kubenswrapper[7721]: I0216 02:16:34.968026 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:34.968147 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:34.968147 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:34.968147 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:34.969015 master-0 kubenswrapper[7721]: I0216 02:16:34.968177 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:35.968903 master-0 kubenswrapper[7721]: I0216 02:16:35.968813 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:35.968903 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:35.968903 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:35.968903 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:35.969956 master-0 kubenswrapper[7721]: I0216 02:16:35.968906 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:36.969255 master-0 kubenswrapper[7721]: I0216 02:16:36.969172 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:36.969255 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:36.969255 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:36.969255 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:36.970354 master-0 kubenswrapper[7721]: I0216 02:16:36.969285 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:37.969540 master-0 kubenswrapper[7721]: I0216 02:16:37.969376 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:37.969540 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:37.969540 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:37.969540 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:37.970582 master-0 kubenswrapper[7721]: I0216 02:16:37.969571 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:38.717878 master-0 kubenswrapper[7721]: I0216 02:16:38.717771 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kffmg_dbc5b101-936f-4bf3-bbf3-f30966b0ab50/approver/1.log"
Feb 16 02:16:38.718691 master-0 kubenswrapper[7721]: I0216 02:16:38.718638 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kffmg_dbc5b101-936f-4bf3-bbf3-f30966b0ab50/approver/0.log"
Feb 16 02:16:38.719321 master-0 kubenswrapper[7721]: I0216 02:16:38.719260 7721 generic.go:334] "Generic (PLEG): container finished" podID="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" containerID="e2b95d6d9e0e9f98872131f6b8b2e0daa77ccd636475f9813d73f42413bc869a" exitCode=1
Feb 16 02:16:38.719566 master-0 kubenswrapper[7721]: I0216 02:16:38.719349 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kffmg" event={"ID":"dbc5b101-936f-4bf3-bbf3-f30966b0ab50","Type":"ContainerDied","Data":"e2b95d6d9e0e9f98872131f6b8b2e0daa77ccd636475f9813d73f42413bc869a"}
Feb 16 02:16:38.719781 master-0 kubenswrapper[7721]: I0216 02:16:38.719754 7721 scope.go:117] "RemoveContainer" containerID="73c5d4096ed4f5f723bea74695c09c9920b7cf6836ef92fa2286119a88696c78"
Feb 16 02:16:38.720798 master-0 kubenswrapper[7721]: I0216 02:16:38.720749 7721 scope.go:117] "RemoveContainer" containerID="e2b95d6d9e0e9f98872131f6b8b2e0daa77ccd636475f9813d73f42413bc869a"
Feb 16 02:16:38.721257 master-0 kubenswrapper[7721]: E0216 02:16:38.721198 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-kffmg_openshift-network-node-identity(dbc5b101-936f-4bf3-bbf3-f30966b0ab50)\"" pod="openshift-network-node-identity/network-node-identity-kffmg" podUID="dbc5b101-936f-4bf3-bbf3-f30966b0ab50"
Feb 16 02:16:38.969172 master-0 kubenswrapper[7721]: I0216 02:16:38.968990 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:38.969172 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:38.969172 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:38.969172 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:38.969172 master-0 kubenswrapper[7721]: I0216 02:16:38.969121 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:39.728620 master-0 kubenswrapper[7721]: I0216 02:16:39.728536 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kffmg_dbc5b101-936f-4bf3-bbf3-f30966b0ab50/approver/1.log"
Feb 16 02:16:39.968473 master-0 kubenswrapper[7721]: I0216 02:16:39.968331 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:39.968473 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:39.968473 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:39.968473 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:39.968473 master-0 kubenswrapper[7721]: I0216 02:16:39.968421 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:40.969578 master-0 kubenswrapper[7721]: I0216 02:16:40.969475 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:40.969578 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:40.969578 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:40.969578 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:40.969578 master-0 kubenswrapper[7721]: I0216 02:16:40.969572 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:40.984822 master-0 kubenswrapper[7721]: E0216 02:16:40.984748 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Feb 16 02:16:41.969536 master-0 kubenswrapper[7721]: I0216 02:16:41.969427 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:41.969536 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:41.969536 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:41.969536 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:41.970521 master-0 kubenswrapper[7721]: I0216 02:16:41.969554 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:42.969527 master-0 kubenswrapper[7721]: I0216 02:16:42.969423 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:42.969527 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:42.969527 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:42.969527 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:42.970592 master-0 kubenswrapper[7721]: I0216 02:16:42.969539 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:43.968537 master-0 kubenswrapper[7721]: I0216 02:16:43.968406 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:43.968537 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:43.968537 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:43.968537 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:43.968974 master-0 kubenswrapper[7721]: I0216 02:16:43.968544 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:44.254693 master-0 kubenswrapper[7721]: E0216 02:16:44.254527 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:16:44.969016 master-0 kubenswrapper[7721]: I0216 02:16:44.968935 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:44.969016 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:44.969016 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:44.969016 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:44.969710 master-0 kubenswrapper[7721]: I0216 02:16:44.969643 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:45.969809 master-0 kubenswrapper[7721]: I0216 02:16:45.969718 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:45.969809 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:45.969809 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:45.969809 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:45.971177 master-0 kubenswrapper[7721]: I0216 02:16:45.969814 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:46.969830 master-0 kubenswrapper[7721]: I0216 02:16:46.969713 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:16:46.969830 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:16:46.969830 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:16:46.969830 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:16:46.971151 master-0 kubenswrapper[7721]: I0216 02:16:46.969834 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:16:47.969029 master-0 kubenswrapper[7721]: I0216 02:16:47.968926 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16
02:16:47.969029 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:47.969029 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:47.969029 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:47.969532 master-0 kubenswrapper[7721]: I0216 02:16:47.969039 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:48.772703 master-0 kubenswrapper[7721]: E0216 02:16:48.772425 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1894983144b03725 openshift-kube-controller-manager 11371 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:532487ad51c30257b744e7c1c79fb34f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:11:30 +0000 UTC,LastTimestamp:2026-02-16 02:15:54.275422568 +0000 UTC m=+557.769656860,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:16:48.969786 master-0 kubenswrapper[7721]: I0216 02:16:48.969686 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:48.969786 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:48.969786 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:48.969786 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:48.970223 master-0 kubenswrapper[7721]: I0216 02:16:48.969818 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:49.968565 master-0 kubenswrapper[7721]: I0216 02:16:49.968479 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:49.968565 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:49.968565 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:49.968565 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:49.969553 master-0 kubenswrapper[7721]: I0216 02:16:49.968622 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:50.725267 master-0 kubenswrapper[7721]: I0216 02:16:50.725164 7721 scope.go:117] "RemoveContainer" containerID="e2b95d6d9e0e9f98872131f6b8b2e0daa77ccd636475f9813d73f42413bc869a" Feb 16 02:16:50.969411 master-0 kubenswrapper[7721]: I0216 02:16:50.969314 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:50.969411 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:50.969411 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:50.969411 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:50.970517 master-0 kubenswrapper[7721]: I0216 02:16:50.969425 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:51.186985 master-0 kubenswrapper[7721]: E0216 02:16:51.186876 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 16 02:16:51.860904 master-0 kubenswrapper[7721]: I0216 02:16:51.860823 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kffmg_dbc5b101-936f-4bf3-bbf3-f30966b0ab50/approver/1.log" Feb 16 02:16:51.861847 master-0 kubenswrapper[7721]: I0216 02:16:51.861780 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kffmg" event={"ID":"dbc5b101-936f-4bf3-bbf3-f30966b0ab50","Type":"ContainerStarted","Data":"f8e9232389b318fb727f9a093b5c3a8a99a78f322c493fc06d4b78e1055bfd3d"} Feb 16 02:16:51.865653 master-0 kubenswrapper[7721]: I0216 02:16:51.865586 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log" Feb 16 02:16:51.868818 master-0 kubenswrapper[7721]: I0216 02:16:51.868762 7721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/0.log" Feb 16 02:16:51.868970 master-0 kubenswrapper[7721]: I0216 02:16:51.868883 7721 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="c315dd4013bf47ddda1fd2c99a095489c35ec0eda907e0f77d5a4d2d27ec8d89" exitCode=137 Feb 16 02:16:51.869053 master-0 kubenswrapper[7721]: I0216 02:16:51.868973 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"c315dd4013bf47ddda1fd2c99a095489c35ec0eda907e0f77d5a4d2d27ec8d89"} Feb 16 02:16:51.869119 master-0 kubenswrapper[7721]: I0216 02:16:51.869060 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"5fc96fb916b196b3dbc229cfd525c7d85b5052106365d264bf8c22b6c5329dbb"} Feb 16 02:16:51.869192 master-0 kubenswrapper[7721]: I0216 02:16:51.869136 7721 scope.go:117] "RemoveContainer" containerID="2764ae0cc6da6493da6557571cb01f0bf8aba4f15b5e56b0e8f80cf54cb86272" Feb 16 02:16:51.968966 master-0 kubenswrapper[7721]: I0216 02:16:51.968884 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:51.968966 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:51.968966 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:51.968966 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:51.969385 master-0 kubenswrapper[7721]: I0216 02:16:51.968983 7721 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:52.881404 master-0 kubenswrapper[7721]: I0216 02:16:52.881354 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log" Feb 16 02:16:52.970029 master-0 kubenswrapper[7721]: I0216 02:16:52.969880 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:52.970029 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:52.970029 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:52.970029 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:52.970029 master-0 kubenswrapper[7721]: I0216 02:16:52.969993 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:53.276748 master-0 kubenswrapper[7721]: I0216 02:16:53.276663 7721 scope.go:117] "RemoveContainer" containerID="462bc8e54438708fbe0de05ecb433d15f63ff46542c44ae6f1cb6f59fc242a3b" Feb 16 02:16:53.303225 master-0 kubenswrapper[7721]: I0216 02:16:53.303180 7721 scope.go:117] "RemoveContainer" containerID="f7886612dab7fdbb2c8fa01ccf5ff672b9f28739bb24c915a3676c6391134016" Feb 16 02:16:53.327743 master-0 kubenswrapper[7721]: I0216 02:16:53.327680 7721 scope.go:117] "RemoveContainer" containerID="1f0cb68115478c6fd515542fbb0fa0d43b3b478c6e2bb7366eec3aa3beebf374" Feb 16 02:16:53.350940 master-0 
kubenswrapper[7721]: I0216 02:16:53.350883 7721 scope.go:117] "RemoveContainer" containerID="e06f2cd26b4721860828d787726c09450d829acd1f0cf5360dbf2c9f1becfde8" Feb 16 02:16:53.747027 master-0 kubenswrapper[7721]: E0216 02:16:53.746937 7721 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 16 02:16:53.747668 master-0 kubenswrapper[7721]: I0216 02:16:53.747623 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 02:16:53.780908 master-0 kubenswrapper[7721]: W0216 02:16:53.780849 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7adecad495595c43c57c30abd350e987.slice/crio-5b8d0bef4d74f4dd2410957462f576db299d55ec6675ac364687f5b27fba5fd5 WatchSource:0}: Error finding container 5b8d0bef4d74f4dd2410957462f576db299d55ec6675ac364687f5b27fba5fd5: Status 404 returned error can't find the container with id 5b8d0bef4d74f4dd2410957462f576db299d55ec6675ac364687f5b27fba5fd5 Feb 16 02:16:53.896561 master-0 kubenswrapper[7721]: I0216 02:16:53.896470 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"5b8d0bef4d74f4dd2410957462f576db299d55ec6675ac364687f5b27fba5fd5"} Feb 16 02:16:53.974038 master-0 kubenswrapper[7721]: I0216 02:16:53.973259 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:53.974038 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:53.974038 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:53.974038 master-0 
kubenswrapper[7721]: healthz check failed Feb 16 02:16:53.974038 master-0 kubenswrapper[7721]: I0216 02:16:53.973362 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:54.256330 master-0 kubenswrapper[7721]: E0216 02:16:54.256198 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:16:54.274714 master-0 kubenswrapper[7721]: I0216 02:16:54.274558 7721 status_manager.go:851] "Failed to get status for pod" podUID="532487ad51c30257b744e7c1c79fb34f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Feb 16 02:16:54.917034 master-0 kubenswrapper[7721]: I0216 02:16:54.916826 7721 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="ec03f418d636771605fae0ee7e9daf8aa0945bcc9619f802f8819cb5a43f7d70" exitCode=0 Feb 16 02:16:54.917034 master-0 kubenswrapper[7721]: I0216 02:16:54.916926 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"ec03f418d636771605fae0ee7e9daf8aa0945bcc9619f802f8819cb5a43f7d70"} Feb 16 02:16:54.919395 master-0 kubenswrapper[7721]: I0216 02:16:54.917249 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06" Feb 16 02:16:54.919395 master-0 kubenswrapper[7721]: I0216 02:16:54.917289 7721 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06" Feb 16 02:16:54.968580 master-0 kubenswrapper[7721]: I0216 02:16:54.968507 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:54.968580 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:54.968580 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:54.968580 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:54.969022 master-0 kubenswrapper[7721]: I0216 02:16:54.968586 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:55.969162 master-0 kubenswrapper[7721]: I0216 02:16:55.969065 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:55.969162 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:55.969162 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:55.969162 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:55.970376 master-0 kubenswrapper[7721]: I0216 02:16:55.969190 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:56.968994 master-0 kubenswrapper[7721]: I0216 02:16:56.968911 7721 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:56.968994 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:56.968994 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:56.968994 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:56.970102 master-0 kubenswrapper[7721]: I0216 02:16:56.969048 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:57.968870 master-0 kubenswrapper[7721]: I0216 02:16:57.968793 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:57.968870 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:57.968870 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:57.968870 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:57.969330 master-0 kubenswrapper[7721]: I0216 02:16:57.968905 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:58.969165 master-0 kubenswrapper[7721]: I0216 02:16:58.969087 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
02:16:58.969165 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:58.969165 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:58.969165 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:58.970212 master-0 kubenswrapper[7721]: I0216 02:16:58.969175 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:16:59.963506 master-0 kubenswrapper[7721]: I0216 02:16:59.962603 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/3.log" Feb 16 02:16:59.963793 master-0 kubenswrapper[7721]: I0216 02:16:59.963553 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/2.log" Feb 16 02:16:59.965044 master-0 kubenswrapper[7721]: I0216 02:16:59.964227 7721 generic.go:334] "Generic (PLEG): container finished" podID="04804a08-e3a5-46f3-abcb-967866834baa" containerID="f8b63e1652f210c6b1254a1e3bd4ab515a460adc2668368a4276c7b3f8a11479" exitCode=1 Feb 16 02:16:59.965044 master-0 kubenswrapper[7721]: I0216 02:16:59.964277 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerDied","Data":"f8b63e1652f210c6b1254a1e3bd4ab515a460adc2668368a4276c7b3f8a11479"} Feb 16 02:16:59.965044 master-0 kubenswrapper[7721]: I0216 02:16:59.964337 7721 scope.go:117] "RemoveContainer" containerID="6a5eef57bcb093780918b99bdb16653d8db2a96f5c207767f7945385b5adfeef" Feb 16 02:16:59.965494 master-0 kubenswrapper[7721]: I0216 02:16:59.965214 7721 scope.go:117] "RemoveContainer" 
containerID="f8b63e1652f210c6b1254a1e3bd4ab515a460adc2668368a4276c7b3f8a11479" Feb 16 02:16:59.965710 master-0 kubenswrapper[7721]: E0216 02:16:59.965648 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa" Feb 16 02:16:59.968490 master-0 kubenswrapper[7721]: I0216 02:16:59.968410 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:16:59.968490 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:16:59.968490 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:16:59.968490 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:16:59.968751 master-0 kubenswrapper[7721]: I0216 02:16:59.968496 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:17:00.797925 master-0 kubenswrapper[7721]: I0216 02:17:00.797815 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:17:00.798918 master-0 kubenswrapper[7721]: I0216 02:17:00.797965 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:17:00.805162 master-0 kubenswrapper[7721]: I0216 02:17:00.805118 7721 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:17:00.969181 master-0 kubenswrapper[7721]: I0216 02:17:00.969095 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:17:00.969181 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:17:00.969181 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:17:00.969181 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:17:00.969181 master-0 kubenswrapper[7721]: I0216 02:17:00.969178 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:17:00.975666 master-0 kubenswrapper[7721]: I0216 02:17:00.975609 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/3.log" Feb 16 02:17:01.588119 master-0 kubenswrapper[7721]: E0216 02:17:01.587979 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 16 02:17:01.969719 master-0 kubenswrapper[7721]: I0216 02:17:01.969642 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:17:01.969719 
master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:17:01.969719 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:17:01.969719 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:17:01.970888 master-0 kubenswrapper[7721]: I0216 02:17:01.969731 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:17:02.971206 master-0 kubenswrapper[7721]: I0216 02:17:02.970429 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:17:02.971206 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:17:02.971206 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:17:02.971206 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:17:02.971206 master-0 kubenswrapper[7721]: I0216 02:17:02.970566 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:17:03.971148 master-0 kubenswrapper[7721]: I0216 02:17:03.971011 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:17:03.971148 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:17:03.971148 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:17:03.971148 master-0 kubenswrapper[7721]: healthz check failed 
Feb 16 02:17:03.971148 master-0 kubenswrapper[7721]: I0216 02:17:03.971108 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:04.256952 master-0 kubenswrapper[7721]: E0216 02:17:04.256740 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:17:04.256952 master-0 kubenswrapper[7721]: E0216 02:17:04.256835 7721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 16 02:17:04.968933 master-0 kubenswrapper[7721]: I0216 02:17:04.968841 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:04.968933 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:04.968933 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:04.968933 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:04.968933 master-0 kubenswrapper[7721]: I0216 02:17:04.968928 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:05.969777 master-0 kubenswrapper[7721]: I0216 02:17:05.969629 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:05.969777 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:05.969777 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:05.969777 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:05.969777 master-0 kubenswrapper[7721]: I0216 02:17:05.969739 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:06.968724 master-0 kubenswrapper[7721]: I0216 02:17:06.968603 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:06.968724 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:06.968724 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:06.968724 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:06.969166 master-0 kubenswrapper[7721]: I0216 02:17:06.968745 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:07.969094 master-0 kubenswrapper[7721]: I0216 02:17:07.969009 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:07.969094 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:07.969094 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:07.969094 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:07.969918 master-0 kubenswrapper[7721]: I0216 02:17:07.969099 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:08.970171 master-0 kubenswrapper[7721]: I0216 02:17:08.970081 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:08.970171 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:08.970171 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:08.970171 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:08.971318 master-0 kubenswrapper[7721]: I0216 02:17:08.970387 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:09.969383 master-0 kubenswrapper[7721]: I0216 02:17:09.969234 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:09.969383 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:09.969383 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:09.969383 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:09.970084 master-0 kubenswrapper[7721]: I0216 02:17:09.969388 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:10.803892 master-0 kubenswrapper[7721]: I0216 02:17:10.803826 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:17:10.968291 master-0 kubenswrapper[7721]: I0216 02:17:10.968222 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:10.968291 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:10.968291 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:10.968291 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:10.968654 master-0 kubenswrapper[7721]: I0216 02:17:10.968309 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:11.969330 master-0 kubenswrapper[7721]: I0216 02:17:11.969215 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:11.969330 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:11.969330 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:11.969330 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:11.969330 master-0 kubenswrapper[7721]: I0216 02:17:11.969330 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:12.389579 master-0 kubenswrapper[7721]: E0216 02:17:12.389178 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Feb 16 02:17:12.969627 master-0 kubenswrapper[7721]: I0216 02:17:12.969504 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:12.969627 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:12.969627 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:12.969627 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:12.969627 master-0 kubenswrapper[7721]: I0216 02:17:12.969610 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:13.969323 master-0 kubenswrapper[7721]: I0216 02:17:13.969198 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:13.969323 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:13.969323 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:13.969323 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:13.969323 master-0 kubenswrapper[7721]: I0216 02:17:13.969314 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:14.968901 master-0 kubenswrapper[7721]: I0216 02:17:14.968806 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:14.968901 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:14.968901 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:14.968901 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:14.970777 master-0 kubenswrapper[7721]: I0216 02:17:14.968908 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:15.725606 master-0 kubenswrapper[7721]: I0216 02:17:15.725518 7721 scope.go:117] "RemoveContainer" containerID="f8b63e1652f210c6b1254a1e3bd4ab515a460adc2668368a4276c7b3f8a11479"
Feb 16 02:17:15.726262 master-0 kubenswrapper[7721]: E0216 02:17:15.725999 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:17:15.969593 master-0 kubenswrapper[7721]: I0216 02:17:15.969493 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:15.969593 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:15.969593 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:15.969593 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:15.970727 master-0 kubenswrapper[7721]: I0216 02:17:15.969623 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:16.970666 master-0 kubenswrapper[7721]: I0216 02:17:16.970410 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:16.970666 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:16.970666 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:16.970666 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:16.971813 master-0 kubenswrapper[7721]: I0216 02:17:16.971401 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:17.970492 master-0 kubenswrapper[7721]: I0216 02:17:17.970283 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:17.970492 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:17.970492 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:17.970492 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:17.971594 master-0 kubenswrapper[7721]: I0216 02:17:17.970530 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:18.116266 master-0 kubenswrapper[7721]: I0216 02:17:18.116145 7721 generic.go:334] "Generic (PLEG): container finished" podID="bde83629-b39c-401e-bc30-5ce205638918" containerID="878600941ff09ae766ef1ccc9a324f0c6d5cbe6f0b05660545fe5e976ad49b02" exitCode=0
Feb 16 02:17:18.116266 master-0 kubenswrapper[7721]: I0216 02:17:18.116238 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" event={"ID":"bde83629-b39c-401e-bc30-5ce205638918","Type":"ContainerDied","Data":"878600941ff09ae766ef1ccc9a324f0c6d5cbe6f0b05660545fe5e976ad49b02"}
Feb 16 02:17:18.116612 master-0 kubenswrapper[7721]: I0216 02:17:18.116313 7721 scope.go:117] "RemoveContainer" containerID="4d828055a40abd365d5f9304f3bb2e1eea303420e0dae2b1729b6e96c17c65b6"
Feb 16 02:17:18.117195 master-0 kubenswrapper[7721]: I0216 02:17:18.117134 7721 scope.go:117] "RemoveContainer" containerID="878600941ff09ae766ef1ccc9a324f0c6d5cbe6f0b05660545fe5e976ad49b02"
Feb 16 02:17:18.117677 master-0 kubenswrapper[7721]: E0216 02:17:18.117623 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6cc5b65c6b-8nl7s_openshift-marketplace(bde83629-b39c-401e-bc30-5ce205638918)\"" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" podUID="bde83629-b39c-401e-bc30-5ce205638918"
Feb 16 02:17:18.969412 master-0 kubenswrapper[7721]: I0216 02:17:18.969262 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:18.969412 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:18.969412 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:18.969412 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:18.969412 master-0 kubenswrapper[7721]: I0216 02:17:18.969490 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:19.786930 master-0 kubenswrapper[7721]: I0216 02:17:19.786817 7721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:17:19.786930 master-0 kubenswrapper[7721]: I0216 02:17:19.786915 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:17:19.788197 master-0 kubenswrapper[7721]: I0216 02:17:19.787810 7721 scope.go:117] "RemoveContainer" containerID="878600941ff09ae766ef1ccc9a324f0c6d5cbe6f0b05660545fe5e976ad49b02"
Feb 16 02:17:19.788197 master-0 kubenswrapper[7721]: E0216 02:17:19.788186 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6cc5b65c6b-8nl7s_openshift-marketplace(bde83629-b39c-401e-bc30-5ce205638918)\"" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" podUID="bde83629-b39c-401e-bc30-5ce205638918"
Feb 16 02:17:19.969142 master-0 kubenswrapper[7721]: I0216 02:17:19.969001 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:19.969142 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:19.969142 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:19.969142 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:19.969142 master-0 kubenswrapper[7721]: I0216 02:17:19.969126 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:20.969058 master-0 kubenswrapper[7721]: I0216 02:17:20.968926 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:20.969058 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:20.969058 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:20.969058 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:20.969058 master-0 kubenswrapper[7721]: I0216 02:17:20.969055 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:21.969611 master-0 kubenswrapper[7721]: I0216 02:17:21.969498 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:21.969611 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:21.969611 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:21.969611 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:21.971071 master-0 kubenswrapper[7721]: I0216 02:17:21.969615 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:22.775973 master-0 kubenswrapper[7721]: E0216 02:17:22.775779 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.18949831579b7b97 openshift-kube-controller-manager 11372 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:532487ad51c30257b744e7c1c79fb34f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:11:31 +0000 UTC,LastTimestamp:2026-02-16 02:15:54.607423829 +0000 UTC m=+558.101658121,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:17:22.970154 master-0 kubenswrapper[7721]: I0216 02:17:22.970019 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:22.970154 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:22.970154 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:22.970154 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:22.970154 master-0 kubenswrapper[7721]: I0216 02:17:22.970138 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:23.969562 master-0 kubenswrapper[7721]: I0216 02:17:23.969426 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:23.969562 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:23.969562 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:23.969562 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:23.969562 master-0 kubenswrapper[7721]: I0216 02:17:23.969550 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:23.990347 master-0 kubenswrapper[7721]: E0216 02:17:23.990240 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Feb 16 02:17:24.969298 master-0 kubenswrapper[7721]: I0216 02:17:24.969174 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:24.969298 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:24.969298 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:24.969298 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:24.969974 master-0 kubenswrapper[7721]: I0216 02:17:24.969316 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:25.968724 master-0 kubenswrapper[7721]: I0216 02:17:25.968598 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:25.968724 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:25.968724 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:25.968724 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:25.968724 master-0 kubenswrapper[7721]: I0216 02:17:25.968699 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:26.969761 master-0 kubenswrapper[7721]: I0216 02:17:26.969679 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:26.969761 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:26.969761 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:26.969761 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:26.970818 master-0 kubenswrapper[7721]: I0216 02:17:26.969770 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:27.969796 master-0 kubenswrapper[7721]: I0216 02:17:27.969667 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:27.969796 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:27.969796 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:27.969796 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:27.971245 master-0 kubenswrapper[7721]: I0216 02:17:27.969846 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:28.920797 master-0 kubenswrapper[7721]: E0216 02:17:28.920643 7721 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 16 02:17:28.969295 master-0 kubenswrapper[7721]: I0216 02:17:28.969128 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:28.969295 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:28.969295 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:28.969295 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:28.969295 master-0 kubenswrapper[7721]: I0216 02:17:28.969226 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:29.218899 master-0 kubenswrapper[7721]: I0216 02:17:29.218822 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj_c4a146b2-c712-408a-97d8-5de3a84f3aaf/config-sync-controllers/0.log"
Feb 16 02:17:29.219942 master-0 kubenswrapper[7721]: I0216 02:17:29.219492 7721 generic.go:334] "Generic (PLEG): container finished" podID="c4a146b2-c712-408a-97d8-5de3a84f3aaf" containerID="7b8d5b60c64a954457f5d3632cc4eab151ef7d06b7f4c5d6693868e55012ceda" exitCode=1
Feb 16 02:17:29.219942 master-0 kubenswrapper[7721]: I0216 02:17:29.219575 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" event={"ID":"c4a146b2-c712-408a-97d8-5de3a84f3aaf","Type":"ContainerDied","Data":"7b8d5b60c64a954457f5d3632cc4eab151ef7d06b7f4c5d6693868e55012ceda"}
Feb 16 02:17:29.220791 master-0 kubenswrapper[7721]: I0216 02:17:29.220721 7721 scope.go:117] "RemoveContainer" containerID="7b8d5b60c64a954457f5d3632cc4eab151ef7d06b7f4c5d6693868e55012ceda"
Feb 16 02:17:29.725916 master-0 kubenswrapper[7721]: I0216 02:17:29.725680 7721 scope.go:117] "RemoveContainer" containerID="f8b63e1652f210c6b1254a1e3bd4ab515a460adc2668368a4276c7b3f8a11479"
Feb 16 02:17:29.726377 master-0 kubenswrapper[7721]: E0216 02:17:29.726303 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:17:29.969078 master-0 kubenswrapper[7721]: I0216 02:17:29.968966 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:29.969078 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:29.969078 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:29.969078 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:29.969605 master-0 kubenswrapper[7721]: I0216 02:17:29.969082 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:30.237236 master-0 kubenswrapper[7721]: I0216 02:17:30.237133 7721 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="8a29b6deeb6009e1fe7a931b2cf89177c0cfa2c70c5aa2feb9a3b9bb5b6df61d" exitCode=0
Feb 16 02:17:30.237236 master-0 kubenswrapper[7721]: I0216 02:17:30.237203 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"8a29b6deeb6009e1fe7a931b2cf89177c0cfa2c70c5aa2feb9a3b9bb5b6df61d"}
Feb 16 02:17:30.238252 master-0 kubenswrapper[7721]: I0216 02:17:30.237688 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06"
Feb 16 02:17:30.238252 master-0 kubenswrapper[7721]: I0216 02:17:30.237726 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06"
Feb 16 02:17:30.241538 master-0 kubenswrapper[7721]: I0216 02:17:30.241429 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj_c4a146b2-c712-408a-97d8-5de3a84f3aaf/config-sync-controllers/0.log"
Feb 16 02:17:30.242315 master-0 kubenswrapper[7721]: I0216 02:17:30.242226 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" event={"ID":"c4a146b2-c712-408a-97d8-5de3a84f3aaf","Type":"ContainerStarted","Data":"3116977c8ab98cd19d9dbf39e2733713c2e0e86df2587421df3c5fa34ff35b8d"}
Feb 16 02:17:30.969762 master-0 kubenswrapper[7721]: I0216 02:17:30.969651 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:30.969762 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:30.969762 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:30.969762 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:30.970222 master-0 kubenswrapper[7721]: I0216 02:17:30.969760 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:31.969624 master-0 kubenswrapper[7721]: I0216 02:17:31.969469 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:31.969624 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:31.969624 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:31.969624 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:31.969624 master-0 kubenswrapper[7721]: I0216 02:17:31.969582 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:32.969872 master-0 kubenswrapper[7721]: I0216 02:17:32.969702 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:32.969872 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:32.969872 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:32.969872 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:32.969872 master-0 kubenswrapper[7721]: I0216 02:17:32.969823 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:33.725488 master-0 kubenswrapper[7721]: I0216 02:17:33.725211 7721 scope.go:117] "RemoveContainer" containerID="878600941ff09ae766ef1ccc9a324f0c6d5cbe6f0b05660545fe5e976ad49b02"
Feb 16 02:17:33.970109 master-0 kubenswrapper[7721]: I0216 02:17:33.970018 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:33.970109 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:33.970109 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:33.970109 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:33.971068 master-0 kubenswrapper[7721]: I0216 02:17:33.970124 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:34.281188 master-0 kubenswrapper[7721]: I0216 02:17:34.281089 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" event={"ID":"bde83629-b39c-401e-bc30-5ce205638918","Type":"ContainerStarted","Data":"c294045cdc69f6c083a4cdeb23b9bbfe3d4c6dfa0c1d7960cd217705505d5fc6"}
Feb 16 02:17:34.281780 master-0 kubenswrapper[7721]: I0216 02:17:34.281701 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:17:34.283128 master-0 kubenswrapper[7721]: I0216 02:17:34.283060 7721 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-8nl7s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body=
Feb 16 02:17:34.283274 master-0 kubenswrapper[7721]: I0216 02:17:34.283153 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" podUID="bde83629-b39c-401e-bc30-5ce205638918" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused"
Feb 16 02:17:34.968054 master-0 kubenswrapper[7721]: I0216 02:17:34.967923 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:34.968054 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:34.968054 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:34.968054 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:34.968054 master-0 kubenswrapper[7721]: I0216 02:17:34.967992 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:35.295200 master-0 kubenswrapper[7721]: I0216 02:17:35.294979 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:17:35.970044 master-0 kubenswrapper[7721]: I0216 02:17:35.969949 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:35.970044 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:35.970044 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:35.970044 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:35.970044 master-0 kubenswrapper[7721]: I0216 02:17:35.970037 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:36.969201 master-0 kubenswrapper[7721]: I0216 02:17:36.969088 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:36.969201 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:36.969201 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:36.969201 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:36.969201 master-0 kubenswrapper[7721]: I0216 02:17:36.969191 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:37.190776 master-0 kubenswrapper[7721]: E0216 02:17:37.190672 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 16 02:17:37.969572 master-0 kubenswrapper[7721]: I0216 02:17:37.969336 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:37.969572 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:37.969572 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:37.969572 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:37.969572 master-0 kubenswrapper[7721]: I0216 02:17:37.969470 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:38.320226 master-0 kubenswrapper[7721]: I0216 02:17:38.320005 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj_c4a146b2-c712-408a-97d8-5de3a84f3aaf/config-sync-controllers/0.log"
Feb 16 02:17:38.321265 master-0 kubenswrapper[7721]: I0216 02:17:38.321203 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj_c4a146b2-c712-408a-97d8-5de3a84f3aaf/cluster-cloud-controller-manager/0.log"
Feb 16 02:17:38.321400 master-0 kubenswrapper[7721]: I0216 02:17:38.321287 7721 generic.go:334] "Generic (PLEG): container finished" podID="c4a146b2-c712-408a-97d8-5de3a84f3aaf" containerID="0e6dfb235fe16f13df03b4a59ee89cd057fbaeee70e2959a56474787817390af" exitCode=1
Feb 16 02:17:38.321400 master-0 kubenswrapper[7721]: I0216 02:17:38.321335 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" event={"ID":"c4a146b2-c712-408a-97d8-5de3a84f3aaf","Type":"ContainerDied","Data":"0e6dfb235fe16f13df03b4a59ee89cd057fbaeee70e2959a56474787817390af"}
Feb 16 02:17:38.322285 master-0 kubenswrapper[7721]: I0216 02:17:38.322233 7721 scope.go:117] "RemoveContainer" containerID="0e6dfb235fe16f13df03b4a59ee89cd057fbaeee70e2959a56474787817390af"
Feb 16 02:17:38.969696 master-0 kubenswrapper[7721]: I0216 02:17:38.969557 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:38.969696 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:38.969696 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:38.969696 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:38.969696 master-0 kubenswrapper[7721]: I0216 02:17:38.969676 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:39.336392 master-0 kubenswrapper[7721]: I0216 02:17:39.336212 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-g9lcm_27d876a7-6a48-4942-ad96-ed8ed3aa104b/manager/1.log"
Feb 16 02:17:39.338159 master-0 kubenswrapper[7721]: I0216 02:17:39.338105 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-g9lcm_27d876a7-6a48-4942-ad96-ed8ed3aa104b/manager/0.log"
Feb 16 02:17:39.338331 master-0 kubenswrapper[7721]: I0216 02:17:39.338180 7721 generic.go:334] "Generic (PLEG): container finished" podID="27d876a7-6a48-4942-ad96-ed8ed3aa104b" containerID="bb54fbd185420265f2400dbce5bb93b2c07ec50f3a0611291aab6640cc25bca3" exitCode=1
Feb 16 02:17:39.338331 master-0 kubenswrapper[7721]: I0216 02:17:39.338245 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" event={"ID":"27d876a7-6a48-4942-ad96-ed8ed3aa104b","Type":"ContainerDied","Data":"bb54fbd185420265f2400dbce5bb93b2c07ec50f3a0611291aab6640cc25bca3"}
Feb 16 02:17:39.338630 master-0 kubenswrapper[7721]: I0216 02:17:39.338359 7721 scope.go:117] "RemoveContainer" containerID="940771c91c013a004b3132c01c764c048ed22316fa2e21d7b58deed65f3ed4cf"
Feb 16 02:17:39.339661 master-0 kubenswrapper[7721]: I0216 02:17:39.339099 7721 scope.go:117] "RemoveContainer" containerID="bb54fbd185420265f2400dbce5bb93b2c07ec50f3a0611291aab6640cc25bca3"
Feb 16 02:17:39.339661 master-0 kubenswrapper[7721]: E0216 02:17:39.339574 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-85c9b89969-g9lcm_openshift-operator-controller(27d876a7-6a48-4942-ad96-ed8ed3aa104b)\"" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" podUID="27d876a7-6a48-4942-ad96-ed8ed3aa104b"
Feb 16 02:17:39.341948 master-0 kubenswrapper[7721]: I0216 02:17:39.341701 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/1.log"
Feb 16 02:17:39.342634 master-0 kubenswrapper[7721]: I0216 02:17:39.342583 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/0.log"
Feb 16 02:17:39.342765 master-0 kubenswrapper[7721]: I0216 02:17:39.342650 7721 generic.go:334] "Generic (PLEG): container finished" podID="a3065737-c7c0-4fbb-b484-f2a9204d4908" containerID="913a5d08144597878e120125e796f5de9a81becfa80d2751d362c9b509551b8f" exitCode=1
Feb 16 02:17:39.342765 master-0 kubenswrapper[7721]: I0216 02:17:39.342741 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerDied","Data":"913a5d08144597878e120125e796f5de9a81becfa80d2751d362c9b509551b8f"}
Feb 16 02:17:39.343496 master-0 kubenswrapper[7721]: I0216 02:17:39.343407 7721 scope.go:117] "RemoveContainer" containerID="913a5d08144597878e120125e796f5de9a81becfa80d2751d362c9b509551b8f"
Feb 16 02:17:39.343837 master-0 kubenswrapper[7721]: E0216 02:17:39.343788 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908"
Feb 16 02:17:39.348805 master-0 kubenswrapper[7721]: I0216 02:17:39.348744 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj_c4a146b2-c712-408a-97d8-5de3a84f3aaf/config-sync-controllers/0.log"
Feb 16 02:17:39.349734 master-0 kubenswrapper[7721]: I0216 02:17:39.349673 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj_c4a146b2-c712-408a-97d8-5de3a84f3aaf/cluster-cloud-controller-manager/0.log"
Feb 16 02:17:39.349903 master-0 kubenswrapper[7721]: I0216 02:17:39.349762 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" event={"ID":"c4a146b2-c712-408a-97d8-5de3a84f3aaf","Type":"ContainerStarted","Data":"dbc59b0605972fc248e9ffac205fbe2355251f90cf500ad93408060d389156cc"}
Feb 16 02:17:39.392154 master-0 kubenswrapper[7721]: I0216 02:17:39.392083 7721 scope.go:117] "RemoveContainer" containerID="37393f3209e22fdba80463ac1612aee9793e0477a277020982d8df5dfbf209db"
Feb 16 02:17:39.968864 master-0 kubenswrapper[7721]: I0216 02:17:39.968761 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:39.968864 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:39.968864 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:39.968864 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:39.969335 master-0 kubenswrapper[7721]: I0216 02:17:39.968873 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:40.371342 master-0 kubenswrapper[7721]: I0216 02:17:40.370701 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-g9lcm_27d876a7-6a48-4942-ad96-ed8ed3aa104b/manager/1.log"
Feb 16 02:17:40.376133 master-0 kubenswrapper[7721]: I0216 02:17:40.376077 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/1.log"
Feb 16 02:17:40.968814 master-0 kubenswrapper[7721]: I0216 02:17:40.968609 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:40.968814 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:40.968814 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:40.968814 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:40.968814 master-0 kubenswrapper[7721]: I0216 02:17:40.968732 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:41.969339 master-0 kubenswrapper[7721]: I0216 02:17:41.969209 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:41.969339 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:41.969339 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:41.969339 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:41.969339 master-0 kubenswrapper[7721]: I0216 02:17:41.969311 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:42.969366 master-0 kubenswrapper[7721]: I0216 02:17:42.969199 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:42.969366 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:42.969366 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:42.969366 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:42.970417 master-0 kubenswrapper[7721]: I0216 02:17:42.969384 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:43.969368 master-0 kubenswrapper[7721]: I0216 02:17:43.969235 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:43.969368 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:43.969368 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:43.969368 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:43.969368 master-0 kubenswrapper[7721]: I0216 02:17:43.969351 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:44.725290 master-0 kubenswrapper[7721]: I0216 02:17:44.725184 7721 scope.go:117] "RemoveContainer" containerID="f8b63e1652f210c6b1254a1e3bd4ab515a460adc2668368a4276c7b3f8a11479"
Feb 16 02:17:44.969040 master-0 kubenswrapper[7721]: I0216 02:17:44.968940 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:44.969040 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:44.969040 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:44.969040 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:44.969040 master-0 kubenswrapper[7721]: I0216 02:17:44.969022 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:45.424323 master-0 kubenswrapper[7721]: I0216 02:17:45.424239 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/3.log"
Feb 16 02:17:45.430474 master-0 kubenswrapper[7721]: I0216 02:17:45.430361 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerStarted","Data":"0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93"}
Feb 16 02:17:45.969906 master-0 kubenswrapper[7721]: I0216 02:17:45.969802 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:45.969906 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:45.969906 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:45.969906 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:45.970915 master-0 kubenswrapper[7721]: I0216 02:17:45.969935 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:46.962679 master-0 kubenswrapper[7721]: I0216 02:17:46.962604 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:17:46.962679 master-0 kubenswrapper[7721]: I0216 02:17:46.962675 7721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:17:46.963554 master-0 kubenswrapper[7721]: I0216 02:17:46.963502 7721 scope.go:117] "RemoveContainer" containerID="bb54fbd185420265f2400dbce5bb93b2c07ec50f3a0611291aab6640cc25bca3"
Feb 16 02:17:46.963935 master-0 kubenswrapper[7721]: E0216 02:17:46.963899 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-85c9b89969-g9lcm_openshift-operator-controller(27d876a7-6a48-4942-ad96-ed8ed3aa104b)\"" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" podUID="27d876a7-6a48-4942-ad96-ed8ed3aa104b"
Feb 16 02:17:46.969543 master-0 kubenswrapper[7721]: I0216 02:17:46.969479 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:46.969543 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:46.969543 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:46.969543 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:46.969840 master-0 kubenswrapper[7721]: I0216 02:17:46.969567 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:47.448258 master-0 kubenswrapper[7721]: I0216 02:17:47.448192 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-zc2br_857357a1-dc98-4dd5-98b3-c94b1ddf9dec/manager/1.log"
Feb 16 02:17:47.449634 master-0 kubenswrapper[7721]: I0216 02:17:47.449106 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-zc2br_857357a1-dc98-4dd5-98b3-c94b1ddf9dec/manager/0.log"
Feb 16 02:17:47.449634 master-0 kubenswrapper[7721]: I0216 02:17:47.449617 7721 generic.go:334] "Generic (PLEG): container finished" podID="857357a1-dc98-4dd5-98b3-c94b1ddf9dec" containerID="0f7ba85e2cd54b9d76dada87f2712d689c26beb2b9f0778369e602e1815aefe6" exitCode=1
Feb 16 02:17:47.449816 master-0 kubenswrapper[7721]: I0216 02:17:47.449645 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" event={"ID":"857357a1-dc98-4dd5-98b3-c94b1ddf9dec","Type":"ContainerDied","Data":"0f7ba85e2cd54b9d76dada87f2712d689c26beb2b9f0778369e602e1815aefe6"}
Feb 16 02:17:47.449816 master-0 kubenswrapper[7721]: I0216 02:17:47.449679 7721 scope.go:117] "RemoveContainer" containerID="5636e1e80751f3a3c96789a21a3143daf15c7ab0cfa132d87dcb28a679f13f01"
Feb 16 02:17:47.450396 master-0 kubenswrapper[7721]: I0216 02:17:47.450340 7721 scope.go:117] "RemoveContainer" containerID="0f7ba85e2cd54b9d76dada87f2712d689c26beb2b9f0778369e602e1815aefe6"
Feb 16 02:17:47.450725 master-0 kubenswrapper[7721]: E0216 02:17:47.450675 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-67bc7c997f-zc2br_openshift-catalogd(857357a1-dc98-4dd5-98b3-c94b1ddf9dec)\"" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" podUID="857357a1-dc98-4dd5-98b3-c94b1ddf9dec"
Feb 16 02:17:47.969276 master-0 kubenswrapper[7721]: I0216 02:17:47.969128 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:47.969276 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:47.969276 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:47.969276 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:47.969276 master-0 kubenswrapper[7721]: I0216 02:17:47.969256 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:48.460386 master-0 kubenswrapper[7721]: I0216 02:17:48.460344 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-zc2br_857357a1-dc98-4dd5-98b3-c94b1ddf9dec/manager/1.log"
Feb 16 02:17:48.968678 master-0 kubenswrapper[7721]: I0216 02:17:48.968598 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:48.968678 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:48.968678 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:48.968678 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:48.969121 master-0 kubenswrapper[7721]: I0216 02:17:48.968686 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:49.968308 master-0 kubenswrapper[7721]: I0216 02:17:49.968185 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:49.968308 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:49.968308 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:49.968308 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:49.968308 master-0 kubenswrapper[7721]: I0216 02:17:49.968296 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:50.724657 master-0 kubenswrapper[7721]: I0216 02:17:50.724570 7721 scope.go:117] "RemoveContainer" containerID="913a5d08144597878e120125e796f5de9a81becfa80d2751d362c9b509551b8f"
Feb 16 02:17:50.969528 master-0 kubenswrapper[7721]: I0216 02:17:50.969352 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:50.969528 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:50.969528 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:50.969528 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:50.970524 master-0 kubenswrapper[7721]: I0216 02:17:50.969591 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:51.487245 master-0 kubenswrapper[7721]: I0216 02:17:51.487169 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/1.log"
Feb 16 02:17:51.487608 master-0 kubenswrapper[7721]: I0216 02:17:51.487263 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerStarted","Data":"c39230a5d2b59263cdbaf8110f073f31b3c9d4bb5cf4e5c5c98b09871359d18e"}
Feb 16 02:17:51.969770 master-0 kubenswrapper[7721]: I0216 02:17:51.969648 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:51.969770 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:51.969770 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:51.969770 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:51.969770 master-0 kubenswrapper[7721]: I0216 02:17:51.969755 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:52.969318 master-0 kubenswrapper[7721]: I0216 02:17:52.969204 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:52.969318 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:52.969318 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:52.969318 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:52.969318 master-0 kubenswrapper[7721]: I0216 02:17:52.969295 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:53.592005 master-0 kubenswrapper[7721]: E0216 02:17:53.591846 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 16 02:17:53.971169 master-0 kubenswrapper[7721]: I0216 02:17:53.971045 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:53.971169 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:53.971169 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:53.971169 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:53.971169 master-0 kubenswrapper[7721]: I0216 02:17:53.971160 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:54.276224 master-0 kubenswrapper[7721]: I0216 02:17:54.276039 7721 status_manager.go:851] "Failed to get status for pod" podUID="9460ca0802075a8a6a10d7b3e6052c4d" pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-scheduler-master-0)"
Feb 16 02:17:54.969570 master-0 kubenswrapper[7721]: I0216 02:17:54.969415 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:54.969570 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:54.969570 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:54.969570 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:54.969570 master-0 kubenswrapper[7721]: I0216 02:17:54.969537 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:55.969124 master-0 kubenswrapper[7721]: I0216 02:17:55.969028 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:17:55.969124 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:17:55.969124 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:17:55.969124 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:17:55.970511 master-0 kubenswrapper[7721]: I0216 02:17:55.969127 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:17:56.781336 master-0 kubenswrapper[7721]: E0216 02:17:56.781030 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189498315898b423 openshift-kube-controller-manager 11373 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:532487ad51c30257b744e7c1c79fb34f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:11:31 +0000 UTC,LastTimestamp:2026-02-16 02:15:54.622818795 +0000 UTC m=+558.117053098,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:17:56.844606 master-0 kubenswrapper[7721]: I0216 02:17:56.844407 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:17:56.846157 master-0 kubenswrapper[7721]: I0216 02:17:56.846094 7721 scope.go:117] "RemoveContainer" containerID="0f7ba85e2cd54b9d76dada87f2712d689c26beb2b9f0778369e602e1815aefe6"
Feb 16 02:17:56.846655 master-0 kubenswrapper[7721]: E0216 02:17:56.846582 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-67bc7c997f-zc2br_openshift-catalogd(857357a1-dc98-4dd5-98b3-c94b1ddf9dec)\"" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" podUID="857357a1-dc98-4dd5-98b3-c94b1ddf9dec"
Feb 16 02:17:56.970339 master-0 kubenswrapper[7721]: I0216 02:17:56.970236 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500"
start-of-body=[-]backend-http failed: reason withheld Feb 16 02:17:56.970339 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:17:56.970339 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:17:56.970339 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:17:56.971408 master-0 kubenswrapper[7721]: I0216 02:17:56.970326 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:17:57.970010 master-0 kubenswrapper[7721]: I0216 02:17:57.969877 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:17:57.970010 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:17:57.970010 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:17:57.970010 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:17:57.970010 master-0 kubenswrapper[7721]: I0216 02:17:57.969990 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:17:58.969366 master-0 kubenswrapper[7721]: I0216 02:17:58.969247 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:17:58.969366 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:17:58.969366 master-0 kubenswrapper[7721]: [+]process-running ok 
Feb 16 02:17:58.969366 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:17:58.969753 master-0 kubenswrapper[7721]: I0216 02:17:58.969361 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:17:59.725256 master-0 kubenswrapper[7721]: I0216 02:17:59.725168 7721 scope.go:117] "RemoveContainer" containerID="bb54fbd185420265f2400dbce5bb93b2c07ec50f3a0611291aab6640cc25bca3" Feb 16 02:17:59.968517 master-0 kubenswrapper[7721]: I0216 02:17:59.968415 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:17:59.968517 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:17:59.968517 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:17:59.968517 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:17:59.968916 master-0 kubenswrapper[7721]: I0216 02:17:59.968515 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:18:00.567680 master-0 kubenswrapper[7721]: I0216 02:18:00.567542 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-g9lcm_27d876a7-6a48-4942-ad96-ed8ed3aa104b/manager/1.log" Feb 16 02:18:00.568370 master-0 kubenswrapper[7721]: I0216 02:18:00.568285 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" 
event={"ID":"27d876a7-6a48-4942-ad96-ed8ed3aa104b","Type":"ContainerStarted","Data":"df34eeaf7b8b1c9ac9f99c98ff9521a6fb563c73917557729d5d579eb3281aa9"} Feb 16 02:18:00.568676 master-0 kubenswrapper[7721]: I0216 02:18:00.568638 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:18:00.968524 master-0 kubenswrapper[7721]: I0216 02:18:00.968432 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:18:00.968524 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:18:00.968524 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:18:00.968524 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:18:00.969082 master-0 kubenswrapper[7721]: I0216 02:18:00.968554 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:18:01.969541 master-0 kubenswrapper[7721]: I0216 02:18:01.969320 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:18:01.969541 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:18:01.969541 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:18:01.969541 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:18:01.969541 master-0 kubenswrapper[7721]: I0216 02:18:01.969458 7721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:18:02.969791 master-0 kubenswrapper[7721]: I0216 02:18:02.969686 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:18:02.969791 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:18:02.969791 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:18:02.969791 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:18:02.970783 master-0 kubenswrapper[7721]: I0216 02:18:02.969805 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:18:03.970405 master-0 kubenswrapper[7721]: I0216 02:18:03.969738 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:18:03.970405 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:18:03.970405 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:18:03.970405 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:18:03.970405 master-0 kubenswrapper[7721]: I0216 02:18:03.969836 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:18:04.241271 
master-0 kubenswrapper[7721]: E0216 02:18:04.241090 7721 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 16 02:18:04.610320 master-0 kubenswrapper[7721]: I0216 02:18:04.610154 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"9fdba8862d6e8ad29a9f7bd67e796348970e6fc6146e389d31c604eee300ee18"} Feb 16 02:18:04.610727 master-0 kubenswrapper[7721]: I0216 02:18:04.610671 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06" Feb 16 02:18:04.610727 master-0 kubenswrapper[7721]: I0216 02:18:04.610722 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06" Feb 16 02:18:04.968733 master-0 kubenswrapper[7721]: I0216 02:18:04.968626 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:18:04.968733 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:18:04.968733 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:18:04.968733 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:18:04.969412 master-0 kubenswrapper[7721]: I0216 02:18:04.968751 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:18:05.621309 master-0 kubenswrapper[7721]: I0216 02:18:05.621191 7721 generic.go:334] "Generic (PLEG): container finished" 
podID="7adecad495595c43c57c30abd350e987" containerID="9fdba8862d6e8ad29a9f7bd67e796348970e6fc6146e389d31c604eee300ee18" exitCode=0 Feb 16 02:18:05.621309 master-0 kubenswrapper[7721]: I0216 02:18:05.621278 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"9fdba8862d6e8ad29a9f7bd67e796348970e6fc6146e389d31c604eee300ee18"} Feb 16 02:18:05.968787 master-0 kubenswrapper[7721]: I0216 02:18:05.968741 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:18:05.968787 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:18:05.968787 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:18:05.968787 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:18:05.969143 master-0 kubenswrapper[7721]: I0216 02:18:05.968819 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:18:05.969143 master-0 kubenswrapper[7721]: I0216 02:18:05.968865 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:18:05.969341 master-0 kubenswrapper[7721]: I0216 02:18:05.969315 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"3ce3bc4dbf9d2b84eb229ace41c4b8a84419c825acb6db3b3ccaf2d00311773f"} pod="openshift-ingress/router-default-864ddd5f56-ffptx" containerMessage="Container router failed startup probe, will be restarted" Feb 16 02:18:05.969412 master-0 kubenswrapper[7721]: I0216 
02:18:05.969359 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" containerID="cri-o://3ce3bc4dbf9d2b84eb229ace41c4b8a84419c825acb6db3b3ccaf2d00311773f" gracePeriod=3600 Feb 16 02:18:06.844099 master-0 kubenswrapper[7721]: I0216 02:18:06.844024 7721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:18:06.845464 master-0 kubenswrapper[7721]: I0216 02:18:06.845394 7721 scope.go:117] "RemoveContainer" containerID="0f7ba85e2cd54b9d76dada87f2712d689c26beb2b9f0778369e602e1815aefe6" Feb 16 02:18:06.966344 master-0 kubenswrapper[7721]: I0216 02:18:06.966292 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:18:07.643905 master-0 kubenswrapper[7721]: I0216 02:18:07.643810 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-r5l9f_a8d00a01-aa48-4830-a558-93a31cb98b31/control-plane-machine-set-operator/0.log" Feb 16 02:18:07.644323 master-0 kubenswrapper[7721]: I0216 02:18:07.643925 7721 generic.go:334] "Generic (PLEG): container finished" podID="a8d00a01-aa48-4830-a558-93a31cb98b31" containerID="cbb215837f1e5b2ced545b28b81dafe9fa0f617cf84f3ee5cf431ddb83b1fb21" exitCode=1 Feb 16 02:18:07.644323 master-0 kubenswrapper[7721]: I0216 02:18:07.644049 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" event={"ID":"a8d00a01-aa48-4830-a558-93a31cb98b31","Type":"ContainerDied","Data":"cbb215837f1e5b2ced545b28b81dafe9fa0f617cf84f3ee5cf431ddb83b1fb21"} Feb 16 02:18:07.645227 master-0 kubenswrapper[7721]: I0216 02:18:07.645156 7721 scope.go:117] 
"RemoveContainer" containerID="cbb215837f1e5b2ced545b28b81dafe9fa0f617cf84f3ee5cf431ddb83b1fb21" Feb 16 02:18:07.649495 master-0 kubenswrapper[7721]: I0216 02:18:07.649343 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-zc2br_857357a1-dc98-4dd5-98b3-c94b1ddf9dec/manager/1.log" Feb 16 02:18:07.650337 master-0 kubenswrapper[7721]: I0216 02:18:07.650250 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" event={"ID":"857357a1-dc98-4dd5-98b3-c94b1ddf9dec","Type":"ContainerStarted","Data":"ce18364b815e675db6ef03723e51caaf036ffa1f9a221b4dbfad3b27652d8e68"} Feb 16 02:18:07.650740 master-0 kubenswrapper[7721]: I0216 02:18:07.650685 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:18:07.654081 master-0 kubenswrapper[7721]: I0216 02:18:07.654013 7721 generic.go:334] "Generic (PLEG): container finished" podID="f7317f91-9441-449f-9738-85da088cf94f" containerID="b0f87ddc237d60c2bab39a1452b1e36c685e800e91756d3d4eee6ecf6e94ac8b" exitCode=0 Feb 16 02:18:07.654081 master-0 kubenswrapper[7721]: I0216 02:18:07.654078 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" event={"ID":"f7317f91-9441-449f-9738-85da088cf94f","Type":"ContainerDied","Data":"b0f87ddc237d60c2bab39a1452b1e36c685e800e91756d3d4eee6ecf6e94ac8b"} Feb 16 02:18:07.654807 master-0 kubenswrapper[7721]: I0216 02:18:07.654751 7721 scope.go:117] "RemoveContainer" containerID="b0f87ddc237d60c2bab39a1452b1e36c685e800e91756d3d4eee6ecf6e94ac8b" Feb 16 02:18:08.668601 master-0 kubenswrapper[7721]: I0216 02:18:08.668499 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" 
event={"ID":"f7317f91-9441-449f-9738-85da088cf94f","Type":"ContainerStarted","Data":"1c2fcab298b2d94d244c0f2f1146cf4d9ed56b1144f3b27d3246c2bbc582dbde"} Feb 16 02:18:08.671677 master-0 kubenswrapper[7721]: I0216 02:18:08.671626 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-r5l9f_a8d00a01-aa48-4830-a558-93a31cb98b31/control-plane-machine-set-operator/0.log" Feb 16 02:18:08.671965 master-0 kubenswrapper[7721]: I0216 02:18:08.671908 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" event={"ID":"a8d00a01-aa48-4830-a558-93a31cb98b31","Type":"ContainerStarted","Data":"0e52ea9b3e4847bf94c5548304f52b8e8a7aaa3be84f5a2010faa38847310a91"} Feb 16 02:18:10.594169 master-0 kubenswrapper[7721]: E0216 02:18:10.594029 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 02:18:13.723594 master-0 kubenswrapper[7721]: I0216 02:18:13.723335 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log" Feb 16 02:18:13.725254 master-0 kubenswrapper[7721]: I0216 02:18:13.725177 7721 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="f04772c7428fae13ccd84b0277f134e7c93b419ed981a3e828e88653f2fe03b1" exitCode=0 Feb 16 02:18:13.725254 master-0 kubenswrapper[7721]: I0216 02:18:13.725239 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"f04772c7428fae13ccd84b0277f134e7c93b419ed981a3e828e88653f2fe03b1"} Feb 16 02:18:13.726121 master-0 kubenswrapper[7721]: I0216 02:18:13.726062 7721 scope.go:117] "RemoveContainer" containerID="f04772c7428fae13ccd84b0277f134e7c93b419ed981a3e828e88653f2fe03b1" Feb 16 02:18:14.734191 master-0 kubenswrapper[7721]: I0216 02:18:14.734075 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log" Feb 16 02:18:14.738068 master-0 kubenswrapper[7721]: I0216 02:18:14.737959 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"85aefff8c480adf2444cd8449bfec5064fe5a159a7fcab2f0f0135cccf7be479"} Feb 16 02:18:15.749745 master-0 kubenswrapper[7721]: I0216 02:18:15.749630 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm_27a42eb0-677c-414d-b0ec-f945ec39b7e9/cluster-baremetal-operator/0.log" Feb 16 02:18:15.749745 master-0 kubenswrapper[7721]: I0216 02:18:15.749725 7721 generic.go:334] "Generic (PLEG): container finished" podID="27a42eb0-677c-414d-b0ec-f945ec39b7e9" containerID="88fcce026048d13fa9f5a17c335729461124a54c88a8e317918ea36be6c9ba26" exitCode=1 Feb 16 02:18:15.750717 master-0 kubenswrapper[7721]: I0216 02:18:15.749786 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" event={"ID":"27a42eb0-677c-414d-b0ec-f945ec39b7e9","Type":"ContainerDied","Data":"88fcce026048d13fa9f5a17c335729461124a54c88a8e317918ea36be6c9ba26"} Feb 16 02:18:15.751172 master-0 kubenswrapper[7721]: I0216 02:18:15.751088 7721 scope.go:117] "RemoveContainer" 
containerID="88fcce026048d13fa9f5a17c335729461124a54c88a8e317918ea36be6c9ba26" Feb 16 02:18:16.762472 master-0 kubenswrapper[7721]: I0216 02:18:16.762357 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm_27a42eb0-677c-414d-b0ec-f945ec39b7e9/cluster-baremetal-operator/0.log" Feb 16 02:18:16.763268 master-0 kubenswrapper[7721]: I0216 02:18:16.762485 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" event={"ID":"27a42eb0-677c-414d-b0ec-f945ec39b7e9","Type":"ContainerStarted","Data":"3ea074aea2a594e75d8dbcb8474e8c5349cb474e287dbfac0d8bcbc83149c9d5"} Feb 16 02:18:16.847259 master-0 kubenswrapper[7721]: I0216 02:18:16.847155 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:18:20.797824 master-0 kubenswrapper[7721]: I0216 02:18:20.797724 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:18:20.799049 master-0 kubenswrapper[7721]: I0216 02:18:20.798077 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:18:21.806247 master-0 kubenswrapper[7721]: I0216 02:18:21.806169 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/2.log" Feb 16 02:18:21.807093 master-0 kubenswrapper[7721]: I0216 02:18:21.806985 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/1.log" Feb 16 02:18:21.807093 master-0 kubenswrapper[7721]: I0216 
02:18:21.807059 7721 generic.go:334] "Generic (PLEG): container finished" podID="a3065737-c7c0-4fbb-b484-f2a9204d4908" containerID="c39230a5d2b59263cdbaf8110f073f31b3c9d4bb5cf4e5c5c98b09871359d18e" exitCode=1 Feb 16 02:18:21.807323 master-0 kubenswrapper[7721]: I0216 02:18:21.807207 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerDied","Data":"c39230a5d2b59263cdbaf8110f073f31b3c9d4bb5cf4e5c5c98b09871359d18e"} Feb 16 02:18:21.807323 master-0 kubenswrapper[7721]: I0216 02:18:21.807302 7721 scope.go:117] "RemoveContainer" containerID="913a5d08144597878e120125e796f5de9a81becfa80d2751d362c9b509551b8f" Feb 16 02:18:21.808345 master-0 kubenswrapper[7721]: I0216 02:18:21.808278 7721 scope.go:117] "RemoveContainer" containerID="c39230a5d2b59263cdbaf8110f073f31b3c9d4bb5cf4e5c5c98b09871359d18e" Feb 16 02:18:21.808775 master-0 kubenswrapper[7721]: E0216 02:18:21.808717 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908" Feb 16 02:18:22.819955 master-0 kubenswrapper[7721]: I0216 02:18:22.819805 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/2.log" Feb 16 02:18:23.798490 master-0 kubenswrapper[7721]: I0216 02:18:23.798346 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup 
probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 02:18:23.798807 master-0 kubenswrapper[7721]: I0216 02:18:23.798529 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 02:18:24.844618 master-0 kubenswrapper[7721]: I0216 02:18:24.844354 7721 generic.go:334] "Generic (PLEG): container finished" podID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerID="fb538a41cea5f683a2ab8b99be06eb74affc528bd353ff7cabad5516264bee81" exitCode=0 Feb 16 02:18:24.844618 master-0 kubenswrapper[7721]: I0216 02:18:24.844425 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" event={"ID":"e491b5ed-9c09-4308-9843-fba8d43bd3ae","Type":"ContainerDied","Data":"fb538a41cea5f683a2ab8b99be06eb74affc528bd353ff7cabad5516264bee81"} Feb 16 02:18:24.846384 master-0 kubenswrapper[7721]: I0216 02:18:24.845232 7721 scope.go:117] "RemoveContainer" containerID="fb538a41cea5f683a2ab8b99be06eb74affc528bd353ff7cabad5516264bee81" Feb 16 02:18:25.855837 master-0 kubenswrapper[7721]: I0216 02:18:25.855759 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" event={"ID":"e491b5ed-9c09-4308-9843-fba8d43bd3ae","Type":"ContainerStarted","Data":"ca909365df9d2aff41f044240436ffc1049e0e2619dc9a8b0a9a1fe204214291"} Feb 16 02:18:25.856690 master-0 kubenswrapper[7721]: I0216 02:18:25.856193 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:18:25.859289 master-0 kubenswrapper[7721]: I0216 02:18:25.859222 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-vqtcl_c442d349-668b-4d01-a097-5981b7a04eac/machine-approver-controller/0.log" Feb 16 02:18:25.859893 master-0 kubenswrapper[7721]: I0216 02:18:25.859832 7721 generic.go:334] "Generic (PLEG): container finished" podID="c442d349-668b-4d01-a097-5981b7a04eac" containerID="7519ecb1c789c2c061040595067f6c82e07370c9c08904abeb4e65bb29dba279" exitCode=255 Feb 16 02:18:25.859998 master-0 kubenswrapper[7721]: I0216 02:18:25.859902 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" event={"ID":"c442d349-668b-4d01-a097-5981b7a04eac","Type":"ContainerDied","Data":"7519ecb1c789c2c061040595067f6c82e07370c9c08904abeb4e65bb29dba279"} Feb 16 02:18:25.860710 master-0 kubenswrapper[7721]: I0216 02:18:25.860663 7721 scope.go:117] "RemoveContainer" containerID="7519ecb1c789c2c061040595067f6c82e07370c9c08904abeb4e65bb29dba279" Feb 16 02:18:25.862741 master-0 kubenswrapper[7721]: I0216 02:18:25.862684 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:18:26.871607 master-0 kubenswrapper[7721]: I0216 02:18:26.871541 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-vqtcl_c442d349-668b-4d01-a097-5981b7a04eac/machine-approver-controller/0.log" Feb 16 02:18:26.872516 master-0 kubenswrapper[7721]: I0216 02:18:26.872199 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" 
event={"ID":"c442d349-668b-4d01-a097-5981b7a04eac","Type":"ContainerStarted","Data":"b3d462e9c6bbf99845ae272d2e12e6bb7fc2da1060f6187dc0663c8d12c28716"}
Feb 16 02:18:27.596134 master-0 kubenswrapper[7721]: E0216 02:18:27.595930 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 16 02:18:30.784952 master-0 kubenswrapper[7721]: E0216 02:18:30.784765 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894986f5013571e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(9460ca0802075a8a6a10d7b3e6052c4d),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:15:57.315684126 +0000 UTC m=+560.809918428,LastTimestamp:2026-02-16 02:15:57.315684126 +0000 UTC m=+560.809918428,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:18:33.798651 master-0 kubenswrapper[7721]: I0216 02:18:33.798537 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:18:33.799586 master-0 kubenswrapper[7721]: I0216 02:18:33.798650 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:18:34.725418 master-0 kubenswrapper[7721]: I0216 02:18:34.725327 7721 scope.go:117] "RemoveContainer" containerID="c39230a5d2b59263cdbaf8110f073f31b3c9d4bb5cf4e5c5c98b09871359d18e"
Feb 16 02:18:34.725855 master-0 kubenswrapper[7721]: E0216 02:18:34.725780 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908"
Feb 16 02:18:38.613642 master-0 kubenswrapper[7721]: E0216 02:18:38.613496 7721 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 16 02:18:38.973894 master-0 kubenswrapper[7721]: I0216 02:18:38.973808 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06"
Feb 16 02:18:38.973894 master-0 kubenswrapper[7721]: I0216 02:18:38.973875 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06"
Feb 16 02:18:43.798649 master-0 kubenswrapper[7721]: I0216 02:18:43.798542 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:18:43.799526 master-0 kubenswrapper[7721]: I0216 02:18:43.798652 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:18:43.799526 master-0 kubenswrapper[7721]: I0216 02:18:43.798738 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:18:43.799959 master-0 kubenswrapper[7721]: I0216 02:18:43.799897 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"85aefff8c480adf2444cd8449bfec5064fe5a159a7fcab2f0f0135cccf7be479"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 16 02:18:43.800106 master-0 kubenswrapper[7721]: I0216 02:18:43.800061 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" containerID="cri-o://85aefff8c480adf2444cd8449bfec5064fe5a159a7fcab2f0f0135cccf7be479" gracePeriod=30
Feb 16 02:18:44.017590 master-0 kubenswrapper[7721]: I0216 02:18:44.017522 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/1.log"
Feb 16 02:18:44.018677 master-0 kubenswrapper[7721]: I0216 02:18:44.018640 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log"
Feb 16 02:18:44.021309 master-0 kubenswrapper[7721]: I0216 02:18:44.020230 7721 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="85aefff8c480adf2444cd8449bfec5064fe5a159a7fcab2f0f0135cccf7be479" exitCode=255
Feb 16 02:18:44.021309 master-0 kubenswrapper[7721]: I0216 02:18:44.020286 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"85aefff8c480adf2444cd8449bfec5064fe5a159a7fcab2f0f0135cccf7be479"}
Feb 16 02:18:44.021309 master-0 kubenswrapper[7721]: I0216 02:18:44.020333 7721 scope.go:117] "RemoveContainer" containerID="f04772c7428fae13ccd84b0277f134e7c93b419ed981a3e828e88653f2fe03b1"
Feb 16 02:18:44.597177 master-0 kubenswrapper[7721]: E0216 02:18:44.596779 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Feb 16 02:18:45.034592 master-0 kubenswrapper[7721]: I0216 02:18:45.034512 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/1.log"
Feb 16 02:18:45.036296 master-0 kubenswrapper[7721]: I0216 02:18:45.036228 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log"
Feb 16 02:18:45.037728 master-0 kubenswrapper[7721]: I0216 02:18:45.037648 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"0774abbdc43d6d09c91b8e6396c1df79557519d012de62e83ac5b464fc305b3e"}
Feb 16 02:18:46.725248 master-0 kubenswrapper[7721]: I0216 02:18:46.725161 7721 scope.go:117] "RemoveContainer" containerID="c39230a5d2b59263cdbaf8110f073f31b3c9d4bb5cf4e5c5c98b09871359d18e"
Feb 16 02:18:47.057864 master-0 kubenswrapper[7721]: I0216 02:18:47.057703 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/2.log"
Feb 16 02:18:47.057864 master-0 kubenswrapper[7721]: I0216 02:18:47.057802 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerStarted","Data":"b4abe24fcfd1048dd070b8fa8ab3c32bc9a46d0638b59c84cbaf7245cf851339"}
Feb 16 02:18:50.798060 master-0 kubenswrapper[7721]: I0216 02:18:50.797967 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:18:50.798060 master-0 kubenswrapper[7721]: I0216 02:18:50.798069 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:18:53.117501 master-0 kubenswrapper[7721]: I0216 02:18:53.117391 7721 generic.go:334] "Generic (PLEG): container finished" podID="17390d9a-148d-4927-a831-5bc4873c43d5" containerID="3ce3bc4dbf9d2b84eb229ace41c4b8a84419c825acb6db3b3ccaf2d00311773f" exitCode=0
Feb 16 02:18:53.118809 master-0 kubenswrapper[7721]: I0216 02:18:53.117510 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerDied","Data":"3ce3bc4dbf9d2b84eb229ace41c4b8a84419c825acb6db3b3ccaf2d00311773f"}
Feb 16 02:18:53.118809 master-0 kubenswrapper[7721]: I0216 02:18:53.117593 7721 scope.go:117] "RemoveContainer" containerID="226c9fa2763b9d22f9a5e31efca1219592ef35d729ba28d906add8e85efb3944"
Feb 16 02:18:53.799325 master-0 kubenswrapper[7721]: I0216 02:18:53.799199 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:18:53.799701 master-0 kubenswrapper[7721]: I0216 02:18:53.799364 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:18:54.130729 master-0 kubenswrapper[7721]: I0216 02:18:54.130539 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerStarted","Data":"6ef739f702cc8cdaf44b732cdf8fab6363588dea12a3413585529df5415a8dd4"}
Feb 16 02:18:54.278612 master-0 kubenswrapper[7721]: I0216 02:18:54.278366 7721 status_manager.go:851] "Failed to get status for pod" podUID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)"
Feb 16 02:18:54.965965 master-0 kubenswrapper[7721]: I0216 02:18:54.965844 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:18:54.969053 master-0 kubenswrapper[7721]: I0216 02:18:54.968962 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:18:54.969053 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:18:54.969053 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:18:54.969053 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:18:54.969387 master-0 kubenswrapper[7721]: I0216 02:18:54.969079 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:18:55.969322 master-0 kubenswrapper[7721]: I0216 02:18:55.969224 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:18:55.969322 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:18:55.969322 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:18:55.969322 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:18:55.970323 master-0 kubenswrapper[7721]: I0216 02:18:55.969334 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:18:56.969385 master-0 kubenswrapper[7721]: I0216 02:18:56.969249 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:18:56.969385 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:18:56.969385 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:18:56.969385 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:18:56.969385 master-0 kubenswrapper[7721]: I0216 02:18:56.969349 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:18:57.970084 master-0 kubenswrapper[7721]: I0216 02:18:57.969941 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:18:57.970084 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:18:57.970084 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:18:57.970084 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:18:57.970084 master-0 kubenswrapper[7721]: I0216 02:18:57.970036 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:18:58.970207 master-0 kubenswrapper[7721]: I0216 02:18:58.970085 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:18:58.970207 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:18:58.970207 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:18:58.970207 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:18:58.970207 master-0 kubenswrapper[7721]: I0216 02:18:58.970207 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:18:59.969147 master-0 kubenswrapper[7721]: I0216 02:18:59.969017 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:18:59.969147 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:18:59.969147 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:18:59.969147 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:18:59.969147 master-0 kubenswrapper[7721]: I0216 02:18:59.969136 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:00.965855 master-0 kubenswrapper[7721]: I0216 02:19:00.965724 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:19:00.971283 master-0 kubenswrapper[7721]: I0216 02:19:00.971102 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:00.971283 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:00.971283 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:00.971283 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:00.971283 master-0 kubenswrapper[7721]: I0216 02:19:00.971182 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:01.598403 master-0 kubenswrapper[7721]: E0216 02:19:01.598276 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 16 02:19:01.969662 master-0 kubenswrapper[7721]: I0216 02:19:01.969530 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:01.969662 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:01.969662 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:01.969662 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:01.970422 master-0 kubenswrapper[7721]: I0216 02:19:01.969678 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:02.969009 master-0 kubenswrapper[7721]: I0216 02:19:02.968880 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:02.969009 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:02.969009 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:02.969009 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:02.969009 master-0 kubenswrapper[7721]: I0216 02:19:02.968991 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:03.798241 master-0 kubenswrapper[7721]: I0216 02:19:03.798128 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:19:03.798627 master-0 kubenswrapper[7721]: I0216 02:19:03.798267 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:19:03.971509 master-0 kubenswrapper[7721]: I0216 02:19:03.969989 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:03.971509 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:03.971509 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:03.971509 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:03.971509 master-0 kubenswrapper[7721]: I0216 02:19:03.970094 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:04.788552 master-0 kubenswrapper[7721]: E0216 02:19:04.788300 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=<
Feb 16 02:19:04.788552 master-0 kubenswrapper[7721]: &Event{ObjectMeta:{kube-controller-manager-master-0.189498701fa2fa6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:532487ad51c30257b744e7c1c79fb34f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused
Feb 16 02:19:04.788552 master-0 kubenswrapper[7721]: body:
Feb 16 02:19:04.788552 master-0 kubenswrapper[7721]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:16:00.797981295 +0000 UTC m=+564.292215597,LastTimestamp:2026-02-16 02:16:00.797981295 +0000 UTC m=+564.292215597,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Feb 16 02:19:04.788552 master-0 kubenswrapper[7721]: >
Feb 16 02:19:04.969562 master-0 kubenswrapper[7721]: I0216 02:19:04.969497 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:04.969562 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:04.969562 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:04.969562 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:04.970140 master-0 kubenswrapper[7721]: I0216 02:19:04.970097 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:05.967765 master-0 kubenswrapper[7721]: I0216 02:19:05.967716 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:05.967765 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:05.967765 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:05.967765 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:05.968563 master-0 kubenswrapper[7721]: I0216 02:19:05.968520 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:06.772398 master-0 kubenswrapper[7721]: E0216 02:19:06.772262 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:18:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:18:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:18:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:18:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:19:06.968607 master-0 kubenswrapper[7721]: I0216 02:19:06.968511 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:06.968607 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:06.968607 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:06.968607 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:06.969628 master-0 kubenswrapper[7721]: I0216 02:19:06.968624 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:07.968598 master-0 kubenswrapper[7721]: I0216 02:19:07.968505 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:07.968598 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:07.968598 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:07.968598 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:07.969628 master-0 kubenswrapper[7721]: I0216 02:19:07.968615 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:08.969088 master-0 kubenswrapper[7721]: I0216 02:19:08.968991 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:08.969088 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:08.969088 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:08.969088 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:08.969957 master-0 kubenswrapper[7721]: I0216 02:19:08.969109 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:09.968930 master-0 kubenswrapper[7721]: I0216 02:19:09.968806 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:09.968930 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:09.968930 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:09.968930 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:09.970599 master-0 kubenswrapper[7721]: I0216 02:19:09.968972 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:10.968756 master-0 kubenswrapper[7721]: I0216 02:19:10.968563 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:10.968756 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:10.968756 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:10.968756 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:10.968756 master-0 kubenswrapper[7721]: I0216 02:19:10.968683 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:11.969148 master-0 kubenswrapper[7721]: I0216 02:19:11.969032 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:11.969148 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:11.969148 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:11.969148 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:11.970418 master-0 kubenswrapper[7721]: I0216 02:19:11.969154 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:12.971052 master-0 kubenswrapper[7721]: I0216 02:19:12.970907 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:12.971052 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:12.971052 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:12.971052 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:12.971052 master-0 kubenswrapper[7721]: I0216 02:19:12.971041 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:12.978354 master-0 kubenswrapper[7721]: E0216 02:19:12.978258 7721 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 16 02:19:13.303732 master-0 kubenswrapper[7721]: I0216 02:19:13.303653 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"5afdb28db1102b8680211572600d2ea86ff6a2b01f828f45c36202b1f159b2ff"}
Feb 16 02:19:13.797667 master-0 kubenswrapper[7721]: I0216 02:19:13.797460 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:19:13.797667 master-0 kubenswrapper[7721]: I0216 02:19:13.797573 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:19:13.797667 master-0 kubenswrapper[7721]: I0216 02:19:13.797654 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:19:13.799003 master-0 kubenswrapper[7721]: I0216 02:19:13.798921 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"0774abbdc43d6d09c91b8e6396c1df79557519d012de62e83ac5b464fc305b3e"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 16 02:19:13.799105 master-0 kubenswrapper[7721]: I0216 02:19:13.799081 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" containerID="cri-o://0774abbdc43d6d09c91b8e6396c1df79557519d012de62e83ac5b464fc305b3e" gracePeriod=30
Feb 16 02:19:13.968503 master-0 kubenswrapper[7721]: I0216 02:19:13.968365 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:13.968503 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:13.968503 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:13.968503 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:13.968916 master-0 kubenswrapper[7721]: I0216 02:19:13.968505 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:14.321676 master-0 kubenswrapper[7721]: I0216 02:19:14.321607 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"3299356ddf4a192b926c81a482f497d9de881fe9f374a90912b930bcbf67c6b6"}
Feb 16 02:19:14.321676 master-0 kubenswrapper[7721]: I0216 02:19:14.321677 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"ae32830ca9150c203b10939e294a55ba6320c100ed6704428d65d387485e03fc"}
Feb 16 02:19:14.325708 master-0 kubenswrapper[7721]: I0216 02:19:14.325670 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/2.log"
Feb 16 02:19:14.326760 master-0 kubenswrapper[7721]: I0216 02:19:14.326703 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/1.log"
Feb 16 02:19:14.327803 master-0 kubenswrapper[7721]: I0216 02:19:14.327771 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log"
Feb 16 02:19:14.328488 master-0 kubenswrapper[7721]: I0216 02:19:14.328425 7721 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="0774abbdc43d6d09c91b8e6396c1df79557519d012de62e83ac5b464fc305b3e" exitCode=255
Feb 16 02:19:14.328488 master-0 kubenswrapper[7721]: I0216 02:19:14.328475 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"0774abbdc43d6d09c91b8e6396c1df79557519d012de62e83ac5b464fc305b3e"}
Feb 16 02:19:14.328628 master-0 kubenswrapper[7721]: I0216 02:19:14.328494 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15"}
Feb 16 02:19:14.328628 master-0 kubenswrapper[7721]: I0216 02:19:14.328516 7721 scope.go:117] "RemoveContainer" containerID="85aefff8c480adf2444cd8449bfec5064fe5a159a7fcab2f0f0135cccf7be479"
Feb 16 02:19:14.968987 master-0 kubenswrapper[7721]: I0216 02:19:14.968872 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:14.968987 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:14.968987 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:14.968987 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:14.968987 master-0 kubenswrapper[7721]: I0216 02:19:14.968967 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:15.342826 master-0 kubenswrapper[7721]: I0216 02:19:15.342744 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/2.log" Feb 16 02:19:15.344267 master-0 kubenswrapper[7721]: I0216 02:19:15.344195 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log" Feb 16 02:19:15.352085 master-0 kubenswrapper[7721]: I0216 02:19:15.351984 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"b50b0f44e3b3780166f8a63fd8d23b62ef36b9fc26de6eb074f3ec5177cf1af3"} Feb 16 02:19:15.352252 master-0 kubenswrapper[7721]: I0216 02:19:15.352092 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"372042b6282c1ebffee97f4b7cbb12648d0476752e838cf6bdaaa46e9c02aadb"} Feb 16 02:19:15.352491 master-0 kubenswrapper[7721]: I0216 02:19:15.352394 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06" Feb 16 02:19:15.352491 master-0 kubenswrapper[7721]: I0216 
02:19:15.352468 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06" Feb 16 02:19:15.969229 master-0 kubenswrapper[7721]: I0216 02:19:15.969111 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:15.969229 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:15.969229 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:15.969229 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:15.969229 master-0 kubenswrapper[7721]: I0216 02:19:15.969214 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:16.365120 master-0 kubenswrapper[7721]: I0216 02:19:16.364914 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm_27a42eb0-677c-414d-b0ec-f945ec39b7e9/cluster-baremetal-operator/1.log" Feb 16 02:19:16.366314 master-0 kubenswrapper[7721]: I0216 02:19:16.366257 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm_27a42eb0-677c-414d-b0ec-f945ec39b7e9/cluster-baremetal-operator/0.log" Feb 16 02:19:16.366483 master-0 kubenswrapper[7721]: I0216 02:19:16.366332 7721 generic.go:334] "Generic (PLEG): container finished" podID="27a42eb0-677c-414d-b0ec-f945ec39b7e9" containerID="3ea074aea2a594e75d8dbcb8474e8c5349cb474e287dbfac0d8bcbc83149c9d5" exitCode=1 Feb 16 02:19:16.366483 master-0 kubenswrapper[7721]: I0216 02:19:16.366384 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" event={"ID":"27a42eb0-677c-414d-b0ec-f945ec39b7e9","Type":"ContainerDied","Data":"3ea074aea2a594e75d8dbcb8474e8c5349cb474e287dbfac0d8bcbc83149c9d5"} Feb 16 02:19:16.366483 master-0 kubenswrapper[7721]: I0216 02:19:16.366473 7721 scope.go:117] "RemoveContainer" containerID="88fcce026048d13fa9f5a17c335729461124a54c88a8e317918ea36be6c9ba26" Feb 16 02:19:16.377491 master-0 kubenswrapper[7721]: I0216 02:19:16.370907 7721 scope.go:117] "RemoveContainer" containerID="3ea074aea2a594e75d8dbcb8474e8c5349cb474e287dbfac0d8bcbc83149c9d5" Feb 16 02:19:16.377491 master-0 kubenswrapper[7721]: E0216 02:19:16.371618 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-frvgm_openshift-machine-api(27a42eb0-677c-414d-b0ec-f945ec39b7e9)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" podUID="27a42eb0-677c-414d-b0ec-f945ec39b7e9" Feb 16 02:19:16.772703 master-0 kubenswrapper[7721]: E0216 02:19:16.772627 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:19:16.969536 master-0 kubenswrapper[7721]: I0216 02:19:16.969413 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:16.969536 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:16.969536 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:16.969536 master-0 
kubenswrapper[7721]: healthz check failed Feb 16 02:19:16.969894 master-0 kubenswrapper[7721]: I0216 02:19:16.969630 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:17.379131 master-0 kubenswrapper[7721]: I0216 02:19:17.379033 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm_27a42eb0-677c-414d-b0ec-f945ec39b7e9/cluster-baremetal-operator/1.log" Feb 16 02:19:17.382724 master-0 kubenswrapper[7721]: I0216 02:19:17.382659 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/3.log" Feb 16 02:19:17.383545 master-0 kubenswrapper[7721]: I0216 02:19:17.383401 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/2.log" Feb 16 02:19:17.383689 master-0 kubenswrapper[7721]: I0216 02:19:17.383554 7721 generic.go:334] "Generic (PLEG): container finished" podID="a3065737-c7c0-4fbb-b484-f2a9204d4908" containerID="b4abe24fcfd1048dd070b8fa8ab3c32bc9a46d0638b59c84cbaf7245cf851339" exitCode=1 Feb 16 02:19:17.383689 master-0 kubenswrapper[7721]: I0216 02:19:17.383622 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerDied","Data":"b4abe24fcfd1048dd070b8fa8ab3c32bc9a46d0638b59c84cbaf7245cf851339"} Feb 16 02:19:17.383689 master-0 kubenswrapper[7721]: I0216 02:19:17.383691 7721 scope.go:117] "RemoveContainer" 
containerID="c39230a5d2b59263cdbaf8110f073f31b3c9d4bb5cf4e5c5c98b09871359d18e" Feb 16 02:19:17.384750 master-0 kubenswrapper[7721]: I0216 02:19:17.384681 7721 scope.go:117] "RemoveContainer" containerID="b4abe24fcfd1048dd070b8fa8ab3c32bc9a46d0638b59c84cbaf7245cf851339" Feb 16 02:19:17.385149 master-0 kubenswrapper[7721]: E0216 02:19:17.385083 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908" Feb 16 02:19:17.969685 master-0 kubenswrapper[7721]: I0216 02:19:17.969568 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:17.969685 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:17.969685 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:17.969685 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:17.970104 master-0 kubenswrapper[7721]: I0216 02:19:17.969701 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:18.394049 master-0 kubenswrapper[7721]: I0216 02:19:18.393863 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/3.log" Feb 16 02:19:18.600473 master-0 
kubenswrapper[7721]: E0216 02:19:18.600327 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 02:19:18.748929 master-0 kubenswrapper[7721]: I0216 02:19:18.748824 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 16 02:19:18.969054 master-0 kubenswrapper[7721]: I0216 02:19:18.968939 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:18.969054 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:18.969054 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:18.969054 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:18.969574 master-0 kubenswrapper[7721]: I0216 02:19:18.969056 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:19.969852 master-0 kubenswrapper[7721]: I0216 02:19:19.969756 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:19.969852 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:19.969852 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:19.969852 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:19.970633 
master-0 kubenswrapper[7721]: I0216 02:19:19.969878 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:20.814239 master-0 kubenswrapper[7721]: I0216 02:19:20.814164 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:19:20.814239 master-0 kubenswrapper[7721]: I0216 02:19:20.814239 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:19:20.969607 master-0 kubenswrapper[7721]: I0216 02:19:20.968543 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:20.969607 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:20.969607 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:20.969607 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:20.969607 master-0 kubenswrapper[7721]: I0216 02:19:20.968632 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:21.970390 master-0 kubenswrapper[7721]: I0216 02:19:21.970285 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:21.970390 master-0 
kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:21.970390 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:21.970390 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:21.971621 master-0 kubenswrapper[7721]: I0216 02:19:21.970397 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:22.969691 master-0 kubenswrapper[7721]: I0216 02:19:22.969535 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:22.969691 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:22.969691 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:22.969691 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:22.969691 master-0 kubenswrapper[7721]: I0216 02:19:22.969667 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:23.749134 master-0 kubenswrapper[7721]: I0216 02:19:23.749009 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 02:19:23.784050 master-0 kubenswrapper[7721]: I0216 02:19:23.783909 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 02:19:23.798725 master-0 kubenswrapper[7721]: I0216 02:19:23.798622 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 02:19:23.798896 master-0 kubenswrapper[7721]: I0216 02:19:23.798716 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 02:19:23.974543 master-0 kubenswrapper[7721]: I0216 02:19:23.969691 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:23.974543 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:23.974543 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:23.974543 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:23.974543 master-0 kubenswrapper[7721]: I0216 02:19:23.969820 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:24.969318 master-0 kubenswrapper[7721]: I0216 02:19:24.969198 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:24.969318 master-0 kubenswrapper[7721]: [-]has-synced failed: 
reason withheld Feb 16 02:19:24.969318 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:24.969318 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:24.970355 master-0 kubenswrapper[7721]: I0216 02:19:24.969317 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:25.968798 master-0 kubenswrapper[7721]: I0216 02:19:25.968668 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:25.968798 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:25.968798 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:25.968798 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:25.970126 master-0 kubenswrapper[7721]: I0216 02:19:25.968822 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:26.773700 master-0 kubenswrapper[7721]: E0216 02:19:26.773585 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:19:26.968868 master-0 kubenswrapper[7721]: I0216 02:19:26.968760 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:26.968868 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:26.968868 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:26.968868 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:26.969344 master-0 kubenswrapper[7721]: I0216 02:19:26.968888 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:27.970547 master-0 kubenswrapper[7721]: I0216 02:19:27.970380 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:27.970547 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:27.970547 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:27.970547 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:27.971606 master-0 kubenswrapper[7721]: I0216 02:19:27.970557 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:28.784727 master-0 kubenswrapper[7721]: I0216 02:19:28.784598 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 16 02:19:28.969605 master-0 kubenswrapper[7721]: I0216 02:19:28.969477 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 16 02:19:28.969605 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:28.969605 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:28.969605 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:28.969605 master-0 kubenswrapper[7721]: I0216 02:19:28.969595 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:29.726608 master-0 kubenswrapper[7721]: I0216 02:19:29.726521 7721 scope.go:117] "RemoveContainer" containerID="3ea074aea2a594e75d8dbcb8474e8c5349cb474e287dbfac0d8bcbc83149c9d5" Feb 16 02:19:29.916035 master-0 kubenswrapper[7721]: I0216 02:19:29.915948 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm_27a42eb0-677c-414d-b0ec-f945ec39b7e9/cluster-baremetal-operator/1.log" Feb 16 02:19:29.916612 master-0 kubenswrapper[7721]: I0216 02:19:29.916548 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" event={"ID":"27a42eb0-677c-414d-b0ec-f945ec39b7e9","Type":"ContainerStarted","Data":"1f9b8cb5579b9e0d6d7237180a327d565cb52ab9273c3c207f8893799bed4c02"} Feb 16 02:19:29.969257 master-0 kubenswrapper[7721]: I0216 02:19:29.969158 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:29.969257 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:29.969257 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:29.969257 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:29.969887 
master-0 kubenswrapper[7721]: I0216 02:19:29.969274 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:30.968636 master-0 kubenswrapper[7721]: I0216 02:19:30.968560 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:30.968636 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:30.968636 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:30.968636 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:30.969869 master-0 kubenswrapper[7721]: I0216 02:19:30.969740 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:31.968971 master-0 kubenswrapper[7721]: I0216 02:19:31.968860 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:31.968971 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:31.968971 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:31.968971 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:31.970143 master-0 kubenswrapper[7721]: I0216 02:19:31.969011 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:32.726130 master-0 kubenswrapper[7721]: I0216 02:19:32.726022 7721 scope.go:117] "RemoveContainer" containerID="b4abe24fcfd1048dd070b8fa8ab3c32bc9a46d0638b59c84cbaf7245cf851339" Feb 16 02:19:32.726558 master-0 kubenswrapper[7721]: E0216 02:19:32.726463 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908" Feb 16 02:19:32.969767 master-0 kubenswrapper[7721]: I0216 02:19:32.969630 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:19:32.969767 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:19:32.969767 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:19:32.969767 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:19:32.969767 master-0 kubenswrapper[7721]: I0216 02:19:32.969744 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:19:33.798550 master-0 kubenswrapper[7721]: I0216 02:19:33.798425 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:19:33.798892 master-0 kubenswrapper[7721]: I0216 02:19:33.798562 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:19:33.969249 master-0 kubenswrapper[7721]: I0216 02:19:33.969121 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:33.969249 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:33.969249 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:33.969249 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:33.969249 master-0 kubenswrapper[7721]: I0216 02:19:33.969237 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:34.968857 master-0 kubenswrapper[7721]: I0216 02:19:34.968695 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:34.968857 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:34.968857 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:34.968857 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:34.968857 master-0 kubenswrapper[7721]: I0216 02:19:34.968778 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:35.601763 master-0 kubenswrapper[7721]: E0216 02:19:35.601678 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 16 02:19:35.969236 master-0 kubenswrapper[7721]: I0216 02:19:35.969115 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:35.969236 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:35.969236 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:35.969236 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:35.970261 master-0 kubenswrapper[7721]: I0216 02:19:35.969238 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:36.775072 master-0 kubenswrapper[7721]: E0216 02:19:36.774951 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:19:36.968989 master-0 kubenswrapper[7721]: I0216 02:19:36.968876 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:36.968989 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:36.968989 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:36.968989 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:36.968989 master-0 kubenswrapper[7721]: I0216 02:19:36.968979 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:37.968595 master-0 kubenswrapper[7721]: I0216 02:19:37.968505 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:37.968595 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:37.968595 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:37.968595 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:37.969135 master-0 kubenswrapper[7721]: I0216 02:19:37.968594 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:38.792046 master-0 kubenswrapper[7721]: E0216 02:19:38.791858 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189498701fa3e189 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:532487ad51c30257b744e7c1c79fb34f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:16:00.798040457 +0000 UTC m=+564.292274749,LastTimestamp:2026-02-16 02:16:00.798040457 +0000 UTC m=+564.292274749,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:19:38.970565 master-0 kubenswrapper[7721]: I0216 02:19:38.970410 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:38.970565 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:38.970565 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:38.970565 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:38.970565 master-0 kubenswrapper[7721]: I0216 02:19:38.970559 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:39.969742 master-0 kubenswrapper[7721]: I0216 02:19:39.969644 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:39.969742 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:39.969742 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:39.969742 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:39.969742 master-0 kubenswrapper[7721]: I0216 02:19:39.969733 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:40.969968 master-0 kubenswrapper[7721]: I0216 02:19:40.969862 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:40.969968 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:40.969968 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:40.969968 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:40.969968 master-0 kubenswrapper[7721]: I0216 02:19:40.969943 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:41.968936 master-0 kubenswrapper[7721]: I0216 02:19:41.968866 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:41.968936 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:41.968936 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:41.968936 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:41.969535 master-0 kubenswrapper[7721]: I0216 02:19:41.969484 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:42.969103 master-0 kubenswrapper[7721]: I0216 02:19:42.968971 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:42.969103 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:42.969103 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:42.969103 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:42.969103 master-0 kubenswrapper[7721]: I0216 02:19:42.969098 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:43.798601 master-0 kubenswrapper[7721]: I0216 02:19:43.798512 7721 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 02:19:43.801362 master-0 kubenswrapper[7721]: I0216 02:19:43.798623 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:19:43.801362 master-0 kubenswrapper[7721]: I0216 02:19:43.798713 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:19:43.801362 master-0 kubenswrapper[7721]: I0216 02:19:43.800061 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 16 02:19:43.801362 master-0 kubenswrapper[7721]: I0216 02:19:43.800233 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" containerID="cri-o://da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15" gracePeriod=30
Feb 16 02:19:43.924043 master-0 kubenswrapper[7721]: E0216 02:19:43.923945 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f"
Feb 16 02:19:43.970329 master-0 kubenswrapper[7721]: I0216 02:19:43.970212 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:43.970329 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:43.970329 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:43.970329 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:43.971561 master-0 kubenswrapper[7721]: I0216 02:19:43.970329 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:44.037683 master-0 kubenswrapper[7721]: I0216 02:19:44.037595 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log"
Feb 16 02:19:44.038667 master-0 kubenswrapper[7721]: I0216 02:19:44.038601 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/2.log"
Feb 16 02:19:44.040659 master-0 kubenswrapper[7721]: I0216 02:19:44.040555 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log"
Feb 16 02:19:44.042148 master-0 kubenswrapper[7721]: I0216 02:19:44.042082 7721 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15" exitCode=255
Feb 16 02:19:44.042268 master-0 kubenswrapper[7721]: I0216 02:19:44.042165 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15"}
Feb 16 02:19:44.042268 master-0 kubenswrapper[7721]: I0216 02:19:44.042256 7721 scope.go:117] "RemoveContainer" containerID="0774abbdc43d6d09c91b8e6396c1df79557519d012de62e83ac5b464fc305b3e"
Feb 16 02:19:44.043693 master-0 kubenswrapper[7721]: I0216 02:19:44.043639 7721 scope.go:117] "RemoveContainer" containerID="da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15"
Feb 16 02:19:44.044655 master-0 kubenswrapper[7721]: E0216 02:19:44.044595 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f"
Feb 16 02:19:44.969085 master-0 kubenswrapper[7721]: I0216 02:19:44.968973 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:44.969085 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:44.969085 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:44.969085 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:44.969705 master-0 kubenswrapper[7721]: I0216 02:19:44.969100 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:45.054186 master-0 kubenswrapper[7721]: I0216 02:19:45.054071 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log"
Feb 16 02:19:45.055778 master-0 kubenswrapper[7721]: I0216 02:19:45.055688 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log"
Feb 16 02:19:45.725613 master-0 kubenswrapper[7721]: I0216 02:19:45.725508 7721 scope.go:117] "RemoveContainer" containerID="b4abe24fcfd1048dd070b8fa8ab3c32bc9a46d0638b59c84cbaf7245cf851339"
Feb 16 02:19:45.726017 master-0 kubenswrapper[7721]: E0216 02:19:45.725931 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908"
Feb 16 02:19:45.969485 master-0 kubenswrapper[7721]: I0216 02:19:45.969376 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:45.969485 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:45.969485 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:45.969485 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:45.969904 master-0 kubenswrapper[7721]: I0216 02:19:45.969511 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:46.068323 master-0 kubenswrapper[7721]: I0216 02:19:46.068162 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/4.log"
Feb 16 02:19:46.070266 master-0 kubenswrapper[7721]: I0216 02:19:46.070202 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/3.log"
Feb 16 02:19:46.071157 master-0 kubenswrapper[7721]: I0216 02:19:46.071078 7721 generic.go:334] "Generic (PLEG): container finished" podID="04804a08-e3a5-46f3-abcb-967866834baa" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93" exitCode=1
Feb 16 02:19:46.071157 master-0 kubenswrapper[7721]: I0216 02:19:46.071147 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerDied","Data":"0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93"}
Feb 16 02:19:46.071339 master-0 kubenswrapper[7721]: I0216 02:19:46.071195 7721 scope.go:117] "RemoveContainer" containerID="f8b63e1652f210c6b1254a1e3bd4ab515a460adc2668368a4276c7b3f8a11479"
Feb 16 02:19:46.071936 master-0 kubenswrapper[7721]: I0216 02:19:46.071874 7721 scope.go:117] "RemoveContainer" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93"
Feb 16 02:19:46.072382 master-0 kubenswrapper[7721]: E0216 02:19:46.072304 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:19:46.775646 master-0 kubenswrapper[7721]: E0216 02:19:46.775547 7721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:19:46.775646 master-0 kubenswrapper[7721]: E0216 02:19:46.775590 7721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 16 02:19:46.969196 master-0 kubenswrapper[7721]: I0216 02:19:46.969094 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:46.969196 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:46.969196 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:46.969196 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:46.969688 master-0 kubenswrapper[7721]: I0216 02:19:46.969207 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:47.081152 master-0 kubenswrapper[7721]: I0216 02:19:47.080925 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/4.log"
Feb 16 02:19:47.969475 master-0 kubenswrapper[7721]: I0216 02:19:47.969359 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:47.969475 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:47.969475 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:47.969475 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:47.969953 master-0 kubenswrapper[7721]: I0216 02:19:47.969476 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:48.969129 master-0 kubenswrapper[7721]: I0216 02:19:48.969021 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:48.969129 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:48.969129 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:48.969129 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:48.970321 master-0 kubenswrapper[7721]: I0216 02:19:48.969154 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:49.357016 master-0 kubenswrapper[7721]: E0216 02:19:49.356791 7721 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 16 02:19:49.969000 master-0 kubenswrapper[7721]: I0216 02:19:49.968922 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:49.969000 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:49.969000 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:49.969000 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:49.970005 master-0 kubenswrapper[7721]: I0216 02:19:49.969012 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:50.104914 master-0 kubenswrapper[7721]: I0216 02:19:50.104829 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06"
Feb 16 02:19:50.104914 master-0 kubenswrapper[7721]: I0216 02:19:50.104879 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06"
Feb 16 02:19:50.797321 master-0 kubenswrapper[7721]: I0216 02:19:50.797199 7721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:19:50.800114 master-0 kubenswrapper[7721]: I0216 02:19:50.800032 7721 scope.go:117] "RemoveContainer" containerID="da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15"
Feb 16 02:19:50.800608 master-0 kubenswrapper[7721]: E0216 02:19:50.800544 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f"
Feb 16 02:19:50.970030 master-0 kubenswrapper[7721]: I0216 02:19:50.969918 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:50.970030 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:50.970030 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:50.970030 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:50.970030 master-0 kubenswrapper[7721]: I0216 02:19:50.970023 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:51.969113 master-0 kubenswrapper[7721]: I0216 02:19:51.968965 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:51.969113 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:51.969113 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:51.969113 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:51.969113 master-0 kubenswrapper[7721]: I0216 02:19:51.969064 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:52.603171 master-0 kubenswrapper[7721]: E0216 02:19:52.603060 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 16 02:19:52.969087 master-0 kubenswrapper[7721]: I0216 02:19:52.968988 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:52.969087 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:52.969087 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:52.969087 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:52.969591 master-0 kubenswrapper[7721]: I0216 02:19:52.969094 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:53.971770 master-0 kubenswrapper[7721]: I0216 02:19:53.971691 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:53.971770 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:53.971770 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:53.971770 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:53.972844 master-0 kubenswrapper[7721]: I0216 02:19:53.971797 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:54.280849 master-0 kubenswrapper[7721]: I0216 02:19:54.280676 7721 status_manager.go:851] "Failed to get status for pod" podUID="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" pod="openshift-network-node-identity/network-node-identity-kffmg" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-kffmg)"
Feb 16 02:19:54.968593 master-0 kubenswrapper[7721]: I0216 02:19:54.968486 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:54.968593 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:54.968593 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:54.968593 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:54.969056 master-0 kubenswrapper[7721]: I0216 02:19:54.968600 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:55.969747 master-0 kubenswrapper[7721]: I0216 02:19:55.969667 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:55.969747 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:55.969747 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:55.969747 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:55.970960 master-0 kubenswrapper[7721]: I0216 02:19:55.969766 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:56.969790 master-0 kubenswrapper[7721]: I0216 02:19:56.969700 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:56.969790 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:56.969790 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:56.969790 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:56.969790 master-0 kubenswrapper[7721]: I0216 02:19:56.969787 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:57.969279 master-0 kubenswrapper[7721]: I0216 02:19:57.969169 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:57.969279 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:57.969279 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:57.969279 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:57.970526 master-0 kubenswrapper[7721]: I0216 02:19:57.969298 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:58.969386 master-0 kubenswrapper[7721]: I0216 02:19:58.969285 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:58.969386 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:58.969386 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:58.969386 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:58.969858 master-0 kubenswrapper[7721]: I0216 02:19:58.969384 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:19:59.969202 master-0 kubenswrapper[7721]: I0216 02:19:59.969082 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:19:59.969202 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:19:59.969202 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:19:59.969202 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:19:59.969202 master-0 kubenswrapper[7721]: I0216 02:19:59.969186 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:00.724742 master-0 kubenswrapper[7721]: I0216 02:20:00.724651 7721 scope.go:117] "RemoveContainer" containerID="b4abe24fcfd1048dd070b8fa8ab3c32bc9a46d0638b59c84cbaf7245cf851339"
Feb 16 02:20:00.725057 master-0 kubenswrapper[7721]: I0216 02:20:00.724895 7721 scope.go:117] "RemoveContainer" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93"
Feb 16 02:20:00.725354 master-0 kubenswrapper[7721]: E0216 02:20:00.725294 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:20:00.969585 master-0 kubenswrapper[7721]: I0216 02:20:00.969481 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:00.969585 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:00.969585 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:00.969585 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:00.969585 master-0 kubenswrapper[7721]: I0216 02:20:00.969572 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:01.203771 master-0 kubenswrapper[7721]: I0216 02:20:01.203694 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/3.log"
Feb 16 02:20:01.204590 master-0 kubenswrapper[7721]: I0216 02:20:01.203787 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerStarted","Data":"10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e"}
Feb 16 02:20:01.726121 master-0 kubenswrapper[7721]: I0216 02:20:01.726020 7721 scope.go:117] "RemoveContainer" containerID="da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15"
Feb 16 02:20:01.726624 master-0 kubenswrapper[7721]: E0216 02:20:01.726563 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f"
Feb 16 02:20:01.969392 master-0 kubenswrapper[7721]: I0216 02:20:01.969250 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:01.969392 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:01.969392 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:01.969392 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:01.969392 master-0 kubenswrapper[7721]: I0216 02:20:01.969357 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:02.969853 master-0 kubenswrapper[7721]: I0216 02:20:02.969708 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:02.969853 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:02.969853 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:02.969853 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:02.969853 master-0 kubenswrapper[7721]: I0216 02:20:02.969841 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:03.970253 master-0 kubenswrapper[7721]: I0216 02:20:03.970136 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:03.970253 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:03.970253 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:03.970253 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:03.970253 master-0 kubenswrapper[7721]: I0216 02:20:03.970243 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:04.969097 master-0 kubenswrapper[7721]: I0216 02:20:04.969028 7721 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:04.969097 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:04.969097 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:04.969097 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:04.969715 master-0 kubenswrapper[7721]: I0216 02:20:04.969669 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:05.969812 master-0 kubenswrapper[7721]: I0216 02:20:05.969712 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:05.969812 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:05.969812 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:05.969812 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:05.971032 master-0 kubenswrapper[7721]: I0216 02:20:05.969812 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:06.969346 master-0 kubenswrapper[7721]: I0216 02:20:06.969216 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:06.969346 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:06.969346 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:06.969346 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:06.969346 master-0 kubenswrapper[7721]: I0216 02:20:06.969316 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:07.968490 master-0 kubenswrapper[7721]: I0216 02:20:07.968380 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:07.968490 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:07.968490 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:07.968490 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:07.968490 master-0 kubenswrapper[7721]: I0216 02:20:07.968496 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:08.968836 master-0 kubenswrapper[7721]: I0216 02:20:08.968759 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:08.968836 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:08.968836 master-0 kubenswrapper[7721]: [+]process-running ok 
Feb 16 02:20:08.968836 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:08.968836 master-0 kubenswrapper[7721]: I0216 02:20:08.968834 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:09.604587 master-0 kubenswrapper[7721]: E0216 02:20:09.604466 7721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 02:20:09.969018 master-0 kubenswrapper[7721]: I0216 02:20:09.968894 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:09.969018 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:09.969018 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:09.969018 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:09.969018 master-0 kubenswrapper[7721]: I0216 02:20:09.968993 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:10.968858 master-0 kubenswrapper[7721]: I0216 02:20:10.968736 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
02:20:10.968858 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:10.968858 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:10.968858 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:10.968858 master-0 kubenswrapper[7721]: I0216 02:20:10.968840 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:11.969636 master-0 kubenswrapper[7721]: I0216 02:20:11.969552 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:11.969636 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:11.969636 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:11.969636 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:11.971042 master-0 kubenswrapper[7721]: I0216 02:20:11.970928 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:12.794584 master-0 kubenswrapper[7721]: E0216 02:20:12.794412 7721 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{router-default-864ddd5f56-ffptx.1894984abb128cac openshift-ingress 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-864ddd5f56-ffptx,UID:17390d9a-148d-4927-a831-5bc4873c43d5,APIVersion:v1,ResourceVersion:10706,FieldPath:spec.containers{router},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:13:20.19700446 +0000 UTC m=+403.691238732,LastTimestamp:2026-02-16 02:16:06.093231027 +0000 UTC m=+569.587465329,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:20:12.969013 master-0 kubenswrapper[7721]: I0216 02:20:12.968919 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:12.969013 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:12.969013 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:12.969013 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:12.969319 master-0 kubenswrapper[7721]: I0216 02:20:12.969018 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:13.970532 master-0 kubenswrapper[7721]: I0216 02:20:13.970228 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:13.970532 
master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:13.970532 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:13.970532 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:13.970532 master-0 kubenswrapper[7721]: I0216 02:20:13.970320 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:14.968693 master-0 kubenswrapper[7721]: I0216 02:20:14.968579 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:14.968693 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:14.968693 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:14.968693 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:14.968693 master-0 kubenswrapper[7721]: I0216 02:20:14.968669 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:15.725260 master-0 kubenswrapper[7721]: I0216 02:20:15.725183 7721 scope.go:117] "RemoveContainer" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93" Feb 16 02:20:15.726076 master-0 kubenswrapper[7721]: E0216 02:20:15.725628 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator 
pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa" Feb 16 02:20:15.969477 master-0 kubenswrapper[7721]: I0216 02:20:15.969361 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:15.969477 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:15.969477 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:15.969477 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:15.969477 master-0 kubenswrapper[7721]: I0216 02:20:15.969463 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:16.336047 master-0 kubenswrapper[7721]: I0216 02:20:16.335953 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-tkqng_23755f7f-dce6-4dcf-9664-22e3aedb5c81/package-server-manager/0.log" Feb 16 02:20:16.336830 master-0 kubenswrapper[7721]: I0216 02:20:16.336754 7721 generic.go:334] "Generic (PLEG): container finished" podID="23755f7f-dce6-4dcf-9664-22e3aedb5c81" containerID="2f12531c82f370a1fa09ec7f01326ed0fd582df87939a5c0bd560230586f4734" exitCode=1 Feb 16 02:20:16.336987 master-0 kubenswrapper[7721]: I0216 02:20:16.336867 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" 
event={"ID":"23755f7f-dce6-4dcf-9664-22e3aedb5c81","Type":"ContainerDied","Data":"2f12531c82f370a1fa09ec7f01326ed0fd582df87939a5c0bd560230586f4734"} Feb 16 02:20:16.337906 master-0 kubenswrapper[7721]: I0216 02:20:16.337832 7721 scope.go:117] "RemoveContainer" containerID="2f12531c82f370a1fa09ec7f01326ed0fd582df87939a5c0bd560230586f4734" Feb 16 02:20:16.339757 master-0 kubenswrapper[7721]: I0216 02:20:16.339195 7721 generic.go:334] "Generic (PLEG): container finished" podID="ad700b17-ba2a-41d4-8bec-538a009a613b" containerID="0f4270e5e44e4ba946d497e39a29fcdd94ebfa1e344531fd5ab06971f1a503e0" exitCode=0 Feb 16 02:20:16.339757 master-0 kubenswrapper[7721]: I0216 02:20:16.339273 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" event={"ID":"ad700b17-ba2a-41d4-8bec-538a009a613b","Type":"ContainerDied","Data":"0f4270e5e44e4ba946d497e39a29fcdd94ebfa1e344531fd5ab06971f1a503e0"} Feb 16 02:20:16.340091 master-0 kubenswrapper[7721]: I0216 02:20:16.339827 7721 scope.go:117] "RemoveContainer" containerID="0f4270e5e44e4ba946d497e39a29fcdd94ebfa1e344531fd5ab06971f1a503e0" Feb 16 02:20:16.342073 master-0 kubenswrapper[7721]: I0216 02:20:16.341958 7721 generic.go:334] "Generic (PLEG): container finished" podID="9defdfff-eb18-4beb-9591-918d0e4b4236" containerID="ea913d4d2d0edfcbff7d836320baff12a198f69ef86939ba8c7d3ee238eec033" exitCode=0 Feb 16 02:20:16.342073 master-0 kubenswrapper[7721]: I0216 02:20:16.342051 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" event={"ID":"9defdfff-eb18-4beb-9591-918d0e4b4236","Type":"ContainerDied","Data":"ea913d4d2d0edfcbff7d836320baff12a198f69ef86939ba8c7d3ee238eec033"} Feb 16 02:20:16.343121 master-0 kubenswrapper[7721]: I0216 02:20:16.343078 7721 scope.go:117] "RemoveContainer" containerID="ea913d4d2d0edfcbff7d836320baff12a198f69ef86939ba8c7d3ee238eec033" Feb 16 02:20:16.345386 master-0 kubenswrapper[7721]: 
I0216 02:20:16.345323 7721 generic.go:334] "Generic (PLEG): container finished" podID="a8f33151-61df-4b66-ba85-9ba210779059" containerID="8cdb2cf816b95ba9c46ea2bd0950b6c6b1a6f09cea50132c976d896bf508decf" exitCode=0 Feb 16 02:20:16.345678 master-0 kubenswrapper[7721]: I0216 02:20:16.345426 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" event={"ID":"a8f33151-61df-4b66-ba85-9ba210779059","Type":"ContainerDied","Data":"8cdb2cf816b95ba9c46ea2bd0950b6c6b1a6f09cea50132c976d896bf508decf"} Feb 16 02:20:16.346130 master-0 kubenswrapper[7721]: I0216 02:20:16.346056 7721 scope.go:117] "RemoveContainer" containerID="8cdb2cf816b95ba9c46ea2bd0950b6c6b1a6f09cea50132c976d896bf508decf" Feb 16 02:20:16.349689 master-0 kubenswrapper[7721]: I0216 02:20:16.349571 7721 generic.go:334] "Generic (PLEG): container finished" podID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerID="bd11ac1d6de053ea502aba0e630fef47dce8c71db2927be44e676bacf23e8754" exitCode=0 Feb 16 02:20:16.349834 master-0 kubenswrapper[7721]: I0216 02:20:16.349682 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerDied","Data":"bd11ac1d6de053ea502aba0e630fef47dce8c71db2927be44e676bacf23e8754"} Feb 16 02:20:16.349834 master-0 kubenswrapper[7721]: I0216 02:20:16.349779 7721 scope.go:117] "RemoveContainer" containerID="df7d86f073bbfdceeb06be9efc451e7abb0405476c53f59a41a6fb24d7d9750e" Feb 16 02:20:16.351507 master-0 kubenswrapper[7721]: I0216 02:20:16.351378 7721 scope.go:117] "RemoveContainer" containerID="bd11ac1d6de053ea502aba0e630fef47dce8c71db2927be44e676bacf23e8754" Feb 16 02:20:16.397539 master-0 kubenswrapper[7721]: I0216 02:20:16.393248 7721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:20:16.726781 master-0 kubenswrapper[7721]: I0216 02:20:16.726713 7721 scope.go:117] "RemoveContainer" containerID="da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15" Feb 16 02:20:16.738376 master-0 kubenswrapper[7721]: E0216 02:20:16.727187 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" Feb 16 02:20:16.968146 master-0 kubenswrapper[7721]: I0216 02:20:16.968086 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:16.968146 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:16.968146 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:16.968146 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:16.968146 master-0 kubenswrapper[7721]: I0216 02:20:16.968149 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:17.367312 master-0 kubenswrapper[7721]: I0216 02:20:17.367147 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" 
event={"ID":"ad700b17-ba2a-41d4-8bec-538a009a613b","Type":"ContainerStarted","Data":"a88aebc3d25f4d02ec19b186d969abb724ebb6d75dd82cc1bdde757313a0b269"} Feb 16 02:20:17.372889 master-0 kubenswrapper[7721]: I0216 02:20:17.372830 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" event={"ID":"9defdfff-eb18-4beb-9591-918d0e4b4236","Type":"ContainerStarted","Data":"9e67cae7233a0fb227c64cf2c1a802320ae6892946df6d62c799db47de3c91c0"} Feb 16 02:20:17.376611 master-0 kubenswrapper[7721]: I0216 02:20:17.376548 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" event={"ID":"a8f33151-61df-4b66-ba85-9ba210779059","Type":"ContainerStarted","Data":"340e609caac9d3e70fd6bf0c82e788e294f176fc0768352d92876da69940cc06"} Feb 16 02:20:17.380187 master-0 kubenswrapper[7721]: I0216 02:20:17.380126 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerStarted","Data":"917b2632e8a4e3e770f86449d7b3ef73654f05bfc0dedeadd349ab3f3148bd85"} Feb 16 02:20:17.381262 master-0 kubenswrapper[7721]: I0216 02:20:17.381206 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:20:17.384497 master-0 kubenswrapper[7721]: I0216 02:20:17.384427 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-tkqng_23755f7f-dce6-4dcf-9664-22e3aedb5c81/package-server-manager/0.log" Feb 16 02:20:17.385062 master-0 kubenswrapper[7721]: I0216 02:20:17.385017 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" 
event={"ID":"23755f7f-dce6-4dcf-9664-22e3aedb5c81","Type":"ContainerStarted","Data":"3cfd84875f189094a6f30d213325a0b38760476e7997a9ed33b7d94b56d12ef8"} Feb 16 02:20:17.385888 master-0 kubenswrapper[7721]: I0216 02:20:17.385845 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:20:17.970023 master-0 kubenswrapper[7721]: I0216 02:20:17.969859 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:17.970023 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:17.970023 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:17.970023 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:17.971124 master-0 kubenswrapper[7721]: I0216 02:20:17.970044 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:18.969612 master-0 kubenswrapper[7721]: I0216 02:20:18.969512 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:18.969612 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:18.969612 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:18.969612 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:18.970062 master-0 kubenswrapper[7721]: I0216 02:20:18.969615 7721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:19.393393 master-0 kubenswrapper[7721]: I0216 02:20:19.393229 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 02:20:19.393393 master-0 kubenswrapper[7721]: I0216 02:20:19.393311 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 02:20:19.409029 master-0 kubenswrapper[7721]: I0216 02:20:19.408930 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 02:20:19.409250 master-0 kubenswrapper[7721]: I0216 02:20:19.409017 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 02:20:19.968708 master-0 kubenswrapper[7721]: I0216 02:20:19.968605 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:19.968708 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:19.968708 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:19.968708 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:19.968708 master-0 kubenswrapper[7721]: I0216 02:20:19.968698 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:20.968782 master-0 kubenswrapper[7721]: I0216 02:20:20.968667 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:20.968782 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:20.968782 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:20.968782 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:20.969760 master-0 kubenswrapper[7721]: I0216 02:20:20.968787 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:20.978386 master-0 kubenswrapper[7721]: I0216 02:20:20.978323 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Feb 16 02:20:20.978555 master-0 kubenswrapper[7721]: I0216 02:20:20.978377 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Feb 16 02:20:21.969077 master-0 kubenswrapper[7721]: I0216 02:20:21.968931 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:21.969077 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:21.969077 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:21.969077 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:21.970291 master-0 kubenswrapper[7721]: I0216 02:20:21.969080 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:22.393685 master-0 kubenswrapper[7721]: I0216 02:20:22.393490 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Feb 16 02:20:22.393685 master-0 kubenswrapper[7721]: I0216 02:20:22.393573 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Feb 16 02:20:22.968917 master-0 kubenswrapper[7721]: I0216 02:20:22.968813 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:22.968917 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:22.968917 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:22.968917 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:22.968917 master-0 kubenswrapper[7721]: I0216 02:20:22.968906 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:23.446619 master-0 kubenswrapper[7721]: I0216 02:20:23.446547 7721 generic.go:334] "Generic (PLEG): container finished" podID="724ac845-3835-458b-9645-e665be135ff9" containerID="f4cc6bf86c33c3e578a43a1648d54a69838bb79c81f9072d23717330a60f1d97" exitCode=0
Feb 16 02:20:23.446955 master-0 kubenswrapper[7721]: I0216 02:20:23.446658 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" event={"ID":"724ac845-3835-458b-9645-e665be135ff9","Type":"ContainerDied","Data":"f4cc6bf86c33c3e578a43a1648d54a69838bb79c81f9072d23717330a60f1d97"}
Feb 16 02:20:23.446955 master-0 kubenswrapper[7721]: I0216 02:20:23.446708 7721 scope.go:117] "RemoveContainer" containerID="7d8525382e7c303df250ff37074c2b59dae064f1c16fab17985b8492c29587df"
Feb 16 02:20:23.447484 master-0 kubenswrapper[7721]: I0216 02:20:23.447387 7721 scope.go:117] "RemoveContainer" containerID="f4cc6bf86c33c3e578a43a1648d54a69838bb79c81f9072d23717330a60f1d97"
Feb 16 02:20:23.451765 master-0 kubenswrapper[7721]: I0216 02:20:23.451710 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-9rvcj_48863ff6-63ac-42d7-bac7-29d888c92db9/cluster-autoscaler-operator/0.log"
Feb 16 02:20:23.452468 master-0 kubenswrapper[7721]: I0216 02:20:23.452383 7721 generic.go:334] "Generic (PLEG): container finished" podID="48863ff6-63ac-42d7-bac7-29d888c92db9" containerID="1169ba7d80653acfb978496c38f306905e7dc8028752f494ebda1e9356b7b0b5" exitCode=255
Feb 16 02:20:23.452576 master-0 kubenswrapper[7721]: I0216 02:20:23.452491 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" event={"ID":"48863ff6-63ac-42d7-bac7-29d888c92db9","Type":"ContainerDied","Data":"1169ba7d80653acfb978496c38f306905e7dc8028752f494ebda1e9356b7b0b5"}
Feb 16 02:20:23.453211 master-0 kubenswrapper[7721]: I0216 02:20:23.453166 7721 scope.go:117] "RemoveContainer" containerID="1169ba7d80653acfb978496c38f306905e7dc8028752f494ebda1e9356b7b0b5"
Feb 16 02:20:23.456953 master-0 kubenswrapper[7721]: I0216 02:20:23.456890 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-qw2zq_fec84b8a-a0d1-4b07-8827-cef0beb89ecd/machine-api-operator/0.log"
Feb 16 02:20:23.457558 master-0 kubenswrapper[7721]: I0216 02:20:23.457506 7721 generic.go:334] "Generic (PLEG): container finished" podID="fec84b8a-a0d1-4b07-8827-cef0beb89ecd" containerID="54e4f3bd63acfa80c546903eac7441d247818158a150a69ea32c8395383dd3ba" exitCode=255
Feb 16 02:20:23.457659 master-0 kubenswrapper[7721]: I0216 02:20:23.457584 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" event={"ID":"fec84b8a-a0d1-4b07-8827-cef0beb89ecd","Type":"ContainerDied","Data":"54e4f3bd63acfa80c546903eac7441d247818158a150a69ea32c8395383dd3ba"}
Feb 16 02:20:23.458172 master-0 kubenswrapper[7721]: I0216 02:20:23.458125 7721 scope.go:117] "RemoveContainer" containerID="54e4f3bd63acfa80c546903eac7441d247818158a150a69ea32c8395383dd3ba"
Feb 16 02:20:23.462865 master-0 kubenswrapper[7721]: I0216 02:20:23.462782 7721 generic.go:334] "Generic (PLEG): container finished" podID="d870332c-2498-4135-a9b3-a71e67c2805b" containerID="8a15ec6edf531733b3fdbab5958c503602c9f05e39693986c688462128642a62" exitCode=0
Feb 16 02:20:23.463077 master-0 kubenswrapper[7721]: I0216 02:20:23.462870 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" event={"ID":"d870332c-2498-4135-a9b3-a71e67c2805b","Type":"ContainerDied","Data":"8a15ec6edf531733b3fdbab5958c503602c9f05e39693986c688462128642a62"}
Feb 16 02:20:23.463761 master-0 kubenswrapper[7721]: I0216 02:20:23.463706 7721 scope.go:117] "RemoveContainer" containerID="8a15ec6edf531733b3fdbab5958c503602c9f05e39693986c688462128642a62"
Feb 16 02:20:23.914556 master-0 kubenswrapper[7721]: I0216 02:20:23.914418 7721 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Feb 16 02:20:23.945829 master-0 kubenswrapper[7721]: I0216 02:20:23.944411 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 16 02:20:23.949799 master-0 kubenswrapper[7721]: I0216 02:20:23.949761 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 16 02:20:23.972938 master-0 kubenswrapper[7721]: I0216 02:20:23.972518 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:23.972938 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:23.972938 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:23.972938 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:23.972938 master-0 kubenswrapper[7721]: I0216 02:20:23.972601 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:23.978413 master-0 kubenswrapper[7721]: I0216 02:20:23.978364 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Feb 16 02:20:23.978413 master-0 kubenswrapper[7721]: I0216 02:20:23.978406 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Feb 16 02:20:24.477918 master-0 kubenswrapper[7721]: I0216 02:20:24.477845 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-qw2zq_fec84b8a-a0d1-4b07-8827-cef0beb89ecd/machine-api-operator/0.log"
Feb 16 02:20:24.478718 master-0 kubenswrapper[7721]: I0216 02:20:24.478649 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" event={"ID":"fec84b8a-a0d1-4b07-8827-cef0beb89ecd","Type":"ContainerStarted","Data":"aa6c39bf935b4c52595cf11c2068c0bafa10dcd5f3b2328c53b8f20a75f34b3c"}
Feb 16 02:20:24.483460 master-0 kubenswrapper[7721]: I0216 02:20:24.483376 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" event={"ID":"d870332c-2498-4135-a9b3-a71e67c2805b","Type":"ContainerStarted","Data":"680a105750002a075137decde7e5ac73ce781f625700d6eb9bc5290401f7a2d9"}
Feb 16 02:20:24.492395 master-0 kubenswrapper[7721]: I0216 02:20:24.492343 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" event={"ID":"724ac845-3835-458b-9645-e665be135ff9","Type":"ContainerStarted","Data":"4b9f21cc6316f1677cb58b34daa9d1d2ed484c6afb04e2b49a1f202ea2fbb547"}
Feb 16 02:20:24.495844 master-0 kubenswrapper[7721]: I0216 02:20:24.495787 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-9rvcj_48863ff6-63ac-42d7-bac7-29d888c92db9/cluster-autoscaler-operator/0.log"
Feb 16 02:20:24.496484 master-0 kubenswrapper[7721]: I0216 02:20:24.496403 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" event={"ID":"48863ff6-63ac-42d7-bac7-29d888c92db9","Type":"ContainerStarted","Data":"27625c929ad72e182a2f7e55b780a3cf4b6052104c239ec643b8a4b4957ed842"}
Feb 16 02:20:24.974707 master-0 kubenswrapper[7721]: I0216 02:20:24.972850 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:24.974707 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:24.974707 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:24.974707 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:24.974707 master-0 kubenswrapper[7721]: I0216 02:20:24.972953 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:25.394329 master-0 kubenswrapper[7721]: I0216 02:20:25.394178 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Feb 16 02:20:25.394779 master-0 kubenswrapper[7721]: I0216 02:20:25.394712 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Feb 16 02:20:25.395021 master-0 kubenswrapper[7721]: I0216 02:20:25.394993 7721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:20:25.396164 master-0 kubenswrapper[7721]: I0216 02:20:25.396123 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"917b2632e8a4e3e770f86449d7b3ef73654f05bfc0dedeadd349ab3f3148bd85"} pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 16 02:20:25.396399 master-0 kubenswrapper[7721]: I0216 02:20:25.396363 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" containerID="cri-o://917b2632e8a4e3e770f86449d7b3ef73654f05bfc0dedeadd349ab3f3148bd85" gracePeriod=30
Feb 16 02:20:25.396702 master-0 kubenswrapper[7721]: I0216 02:20:25.396498 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Feb 16 02:20:25.396828 master-0 kubenswrapper[7721]: I0216 02:20:25.396750 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Feb 16 02:20:25.529831 master-0 kubenswrapper[7721]: I0216 02:20:25.529716 7721 generic.go:334] "Generic (PLEG): container finished" podID="30fef0d5-46ea-4fa3-9ffa-88187d010ffe" containerID="4f4386d569551a2cb1add9279ae5e39db1d0c3382f70cefdecbf2167f005bf64" exitCode=0
Feb 16 02:20:25.530077 master-0 kubenswrapper[7721]: I0216 02:20:25.529878 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" event={"ID":"30fef0d5-46ea-4fa3-9ffa-88187d010ffe","Type":"ContainerDied","Data":"4f4386d569551a2cb1add9279ae5e39db1d0c3382f70cefdecbf2167f005bf64"}
Feb 16 02:20:25.531265 master-0 kubenswrapper[7721]: I0216 02:20:25.531205 7721 scope.go:117] "RemoveContainer" containerID="4f4386d569551a2cb1add9279ae5e39db1d0c3382f70cefdecbf2167f005bf64"
Feb 16 02:20:25.533982 master-0 kubenswrapper[7721]: I0216 02:20:25.533926 7721 generic.go:334] "Generic (PLEG): container finished" podID="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" containerID="a6d90aff6f8ce2ab976f48907c4d1b01e98afde362aa201e2dc712d88fff6eb6" exitCode=0
Feb 16 02:20:25.534139 master-0 kubenswrapper[7721]: I0216 02:20:25.534020 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" event={"ID":"980aa005-f51d-4ca2-aee6-a6fdeefd86d0","Type":"ContainerDied","Data":"a6d90aff6f8ce2ab976f48907c4d1b01e98afde362aa201e2dc712d88fff6eb6"}
Feb 16 02:20:25.534139 master-0 kubenswrapper[7721]: I0216 02:20:25.534079 7721 scope.go:117] "RemoveContainer" containerID="ff658673f7455373622a5b5ee3d4af2de4dca2c4bf35a4e09ed477558c99902a"
Feb 16 02:20:25.535154 master-0 kubenswrapper[7721]: I0216 02:20:25.535006 7721 scope.go:117] "RemoveContainer" containerID="a6d90aff6f8ce2ab976f48907c4d1b01e98afde362aa201e2dc712d88fff6eb6"
Feb 16 02:20:25.537759 master-0 kubenswrapper[7721]: I0216 02:20:25.537554 7721 generic.go:334] "Generic (PLEG): container finished" podID="a77e2f8f-d164-4a58-aab2-f3444c05cacb" containerID="992140dbf9ae65014df74f84c27ac943b6aa3fa48ebab2a299f13ea17d92ff73" exitCode=0
Feb 16 02:20:25.537759 master-0 kubenswrapper[7721]: I0216 02:20:25.537696 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" event={"ID":"a77e2f8f-d164-4a58-aab2-f3444c05cacb","Type":"ContainerDied","Data":"992140dbf9ae65014df74f84c27ac943b6aa3fa48ebab2a299f13ea17d92ff73"}
Feb 16 02:20:25.538220 master-0 kubenswrapper[7721]: I0216 02:20:25.538180 7721 scope.go:117] "RemoveContainer" containerID="992140dbf9ae65014df74f84c27ac943b6aa3fa48ebab2a299f13ea17d92ff73"
Feb 16 02:20:25.541257 master-0 kubenswrapper[7721]: I0216 02:20:25.541218 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-dctqr_456e6c3a-c16c-470b-a0cd-bb79865b54f0/network-operator/1.log"
Feb 16 02:20:25.541394 master-0 kubenswrapper[7721]: I0216 02:20:25.541276 7721 generic.go:334] "Generic (PLEG): container finished" podID="456e6c3a-c16c-470b-a0cd-bb79865b54f0" containerID="0315328a7c0259163748331a3160b081a82efff7afa5ee439e110ed017ac4025" exitCode=0
Feb 16 02:20:25.541394 master-0 kubenswrapper[7721]: I0216 02:20:25.541356 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" event={"ID":"456e6c3a-c16c-470b-a0cd-bb79865b54f0","Type":"ContainerDied","Data":"0315328a7c0259163748331a3160b081a82efff7afa5ee439e110ed017ac4025"}
Feb 16 02:20:25.542019 master-0 kubenswrapper[7721]: I0216 02:20:25.541906 7721 scope.go:117] "RemoveContainer" containerID="0315328a7c0259163748331a3160b081a82efff7afa5ee439e110ed017ac4025"
Feb 16 02:20:25.544538 master-0 kubenswrapper[7721]: I0216 02:20:25.544481 7721 generic.go:334] "Generic (PLEG): container finished" podID="83883885-f493-4559-9c0f-e28d69712475" containerID="2a160f2c1742de9f4ba99becfe5db3107e11e652c40ff70cb3349e1627a9a147" exitCode=0
Feb 16 02:20:25.544718 master-0 kubenswrapper[7721]: I0216 02:20:25.544488 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" event={"ID":"83883885-f493-4559-9c0f-e28d69712475","Type":"ContainerDied","Data":"2a160f2c1742de9f4ba99becfe5db3107e11e652c40ff70cb3349e1627a9a147"}
Feb 16 02:20:25.545407 master-0 kubenswrapper[7721]: I0216 02:20:25.545368 7721 scope.go:117] "RemoveContainer" containerID="2a160f2c1742de9f4ba99becfe5db3107e11e652c40ff70cb3349e1627a9a147"
Feb 16 02:20:25.552243 master-0 kubenswrapper[7721]: I0216 02:20:25.549104 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-v7lmz_e379cfaf-3a4c-40e7-8641-3524b3669295/openshift-apiserver-operator/1.log"
Feb 16 02:20:25.552243 master-0 kubenswrapper[7721]: I0216 02:20:25.549158 7721 generic.go:334] "Generic (PLEG): container finished" podID="e379cfaf-3a4c-40e7-8641-3524b3669295" containerID="29588e18b21fc378729e293fc4d3e978d87e6e1444fa9f91d1cf677cd080ce85" exitCode=0
Feb 16 02:20:25.552243 master-0 kubenswrapper[7721]: I0216 02:20:25.549195 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" event={"ID":"e379cfaf-3a4c-40e7-8641-3524b3669295","Type":"ContainerDied","Data":"29588e18b21fc378729e293fc4d3e978d87e6e1444fa9f91d1cf677cd080ce85"}
Feb 16 02:20:25.552243 master-0 kubenswrapper[7721]: I0216 02:20:25.549732 7721 scope.go:117] "RemoveContainer" containerID="29588e18b21fc378729e293fc4d3e978d87e6e1444fa9f91d1cf677cd080ce85"
Feb 16 02:20:25.753311 master-0 kubenswrapper[7721]: I0216 02:20:25.753278 7721 scope.go:117] "RemoveContainer" containerID="d6b8bd52621bc720ed9dc674f34ff05c02fea3c605d0d925e1c0610bec6f8610"
Feb 16 02:20:25.829108 master-0 kubenswrapper[7721]: I0216 02:20:25.828777 7721 scope.go:117] "RemoveContainer" containerID="e691a05529b4feda1459fb089aa0bfd36c24c35f07686b8d317ee98a6be4be8a"
Feb 16 02:20:25.969079 master-0 kubenswrapper[7721]: I0216 02:20:25.969031 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:25.969079 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:25.969079 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:25.969079 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:25.969379 master-0 kubenswrapper[7721]: I0216 02:20:25.969118 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:26.563609 master-0 kubenswrapper[7721]: I0216 02:20:26.563507 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" event={"ID":"83883885-f493-4559-9c0f-e28d69712475","Type":"ContainerStarted","Data":"d4e52a7c33349d83246362e459328cc0aaf6ec2111900263828e0014dd564224"}
Feb 16 02:20:26.564378 master-0 kubenswrapper[7721]: I0216 02:20:26.564188 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:20:26.567181 master-0 kubenswrapper[7721]: I0216 02:20:26.567117 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" event={"ID":"e379cfaf-3a4c-40e7-8641-3524b3669295","Type":"ContainerStarted","Data":"5706b7fa16021c47b9a840af8b157eb4cc65f8c99a56cbad194a882c1d62c146"}
Feb 16 02:20:26.570402 master-0 kubenswrapper[7721]: I0216 02:20:26.570346 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" event={"ID":"30fef0d5-46ea-4fa3-9ffa-88187d010ffe","Type":"ContainerStarted","Data":"8674b9c5fc1b70f2696df422593300d650313e2c3b2ccfe73b7d4e2cbb332e54"}
Feb 16 02:20:26.573760 master-0 kubenswrapper[7721]: I0216 02:20:26.572823 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" event={"ID":"980aa005-f51d-4ca2-aee6-a6fdeefd86d0","Type":"ContainerStarted","Data":"eb82a3a7f65c200dbb3560a84c8911d6a64389f4885f5913cde6df22c36d3007"}
Feb 16 02:20:26.578225 master-0 kubenswrapper[7721]: I0216 02:20:26.578186 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" event={"ID":"a77e2f8f-d164-4a58-aab2-f3444c05cacb","Type":"ContainerStarted","Data":"8a7ce5fa85b8c537c19845c8159585edebdb77d590b7bfaebb922e88c337bd3e"}
Feb 16 02:20:26.578538 master-0 kubenswrapper[7721]: I0216 02:20:26.578502 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:20:26.582048 master-0 kubenswrapper[7721]: I0216 02:20:26.581982 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" event={"ID":"456e6c3a-c16c-470b-a0cd-bb79865b54f0","Type":"ContainerStarted","Data":"1daaab1794cfd1807a79ddf79bbc061f5a689969870687ca09707f5ea1c6f2e2"}
Feb 16 02:20:26.585231 master-0 kubenswrapper[7721]: I0216 02:20:26.585135 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-zlbd2_9be9fd24-fdb1-43dc-80b8-68020427bfd7/openshift-config-operator/2.log"
Feb 16 02:20:26.586177 master-0 kubenswrapper[7721]: I0216 02:20:26.586118 7721 generic.go:334] "Generic (PLEG): container finished" podID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerID="917b2632e8a4e3e770f86449d7b3ef73654f05bfc0dedeadd349ab3f3148bd85" exitCode=255
Feb 16 02:20:26.586177 master-0 kubenswrapper[7721]: I0216 02:20:26.586173 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerDied","Data":"917b2632e8a4e3e770f86449d7b3ef73654f05bfc0dedeadd349ab3f3148bd85"}
Feb 16 02:20:26.586347 master-0 kubenswrapper[7721]: I0216 02:20:26.586207 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerStarted","Data":"7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf"}
Feb 16 02:20:26.586347 master-0 kubenswrapper[7721]: I0216 02:20:26.586242 7721 scope.go:117] "RemoveContainer" containerID="bd11ac1d6de053ea502aba0e630fef47dce8c71db2927be44e676bacf23e8754"
Feb 16 02:20:26.586741 master-0 kubenswrapper[7721]: I0216 02:20:26.586702 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:20:26.969535 master-0 kubenswrapper[7721]: I0216 02:20:26.969394 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:26.969535 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:26.969535 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:26.969535 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:26.970016 master-0 kubenswrapper[7721]: I0216 02:20:26.969550 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:27.601426 master-0 kubenswrapper[7721]: I0216 02:20:27.601303 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-zlbd2_9be9fd24-fdb1-43dc-80b8-68020427bfd7/openshift-config-operator/2.log"
Feb 16 02:20:27.725705 master-0 kubenswrapper[7721]: I0216 02:20:27.725625 7721 scope.go:117] "RemoveContainer" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93"
Feb 16 02:20:27.726190 master-0 kubenswrapper[7721]: E0216 02:20:27.726120 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:20:27.968148 master-0 kubenswrapper[7721]: I0216 02:20:27.968050 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:27.968148 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:27.968148 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:27.968148 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:27.968923 master-0 kubenswrapper[7721]: I0216 02:20:27.968823 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:28.968568 master-0 kubenswrapper[7721]: I0216 02:20:28.968483 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:28.968568 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:28.968568 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:28.968568 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:28.969824 master-0 kubenswrapper[7721]: I0216 02:20:28.969647 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:29.540978 master-0 kubenswrapper[7721]: I0216 02:20:29.540896 7721 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-htjgz container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused" start-of-body=
Feb 16 02:20:29.541309 master-0 kubenswrapper[7721]: I0216 02:20:29.540989 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" podUID="724ac845-3835-458b-9645-e665be135ff9" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.12:8443/healthz\": dial tcp 10.128.0.12:8443: connect: connection refused"
Feb 16 02:20:29.968309 master-0 kubenswrapper[7721]: I0216 02:20:29.968241 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:29.968309 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:29.968309 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:29.968309 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:29.969430 master-0 kubenswrapper[7721]: I0216 02:20:29.969382 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:29.979068 master-0 kubenswrapper[7721]: I0216 02:20:29.979025 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Feb 16 02:20:29.979332 master-0 kubenswrapper[7721]: I0216 02:20:29.979293 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Feb 16 02:20:30.725170 master-0 kubenswrapper[7721]: I0216 02:20:30.725069 7721 scope.go:117] "RemoveContainer" containerID="da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15"
Feb 16 02:20:30.969174 master-0 kubenswrapper[7721]: I0216 02:20:30.969093 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:30.969174 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:30.969174 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:30.969174 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:30.970125 master-0 kubenswrapper[7721]: I0216 02:20:30.969190 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:31.394645 master-0 kubenswrapper[7721]: I0216 02:20:31.394419 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Feb 16 02:20:31.394645 master-0 kubenswrapper[7721]: I0216 02:20:31.394568 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Feb 16 02:20:31.640896 master-0 kubenswrapper[7721]: I0216 02:20:31.640840 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/4.log"
Feb 16 02:20:31.641665 master-0 kubenswrapper[7721]: I0216 02:20:31.641627 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/3.log"
Feb 16 02:20:31.641806 master-0 kubenswrapper[7721]: I0216 02:20:31.641696 7721 generic.go:334] "Generic (PLEG): container finished" podID="a3065737-c7c0-4fbb-b484-f2a9204d4908" containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e" exitCode=1
Feb 16 02:20:31.641806 master-0 kubenswrapper[7721]: I0216 02:20:31.641785 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerDied","Data":"10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e"}
Feb 16 02:20:31.641981 master-0 kubenswrapper[7721]: I0216 02:20:31.641841 7721 scope.go:117] "RemoveContainer" containerID="b4abe24fcfd1048dd070b8fa8ab3c32bc9a46d0638b59c84cbaf7245cf851339"
Feb 16 02:20:31.642984 master-0 kubenswrapper[7721]: I0216 02:20:31.642925 7721 scope.go:117] "RemoveContainer" containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e"
Feb 16 02:20:31.643897 master-0 kubenswrapper[7721]: E0216 02:20:31.643827 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908"
Feb 16 02:20:31.645467 master-0 kubenswrapper[7721]: I0216 02:20:31.645326 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log"
Feb 16 02:20:31.646844 master-0 kubenswrapper[7721]: I0216 02:20:31.646801 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log"
Feb 16 02:20:31.647672 master-0 kubenswrapper[7721]: I0216 02:20:31.647631 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"789e61bec232bf870ef2e4f73549435ac6af8ac001a93d4407c58240635552e4"}
Feb 16 02:20:31.968911 master-0 kubenswrapper[7721]: I0216 02:20:31.968767 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:31.968911 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:31.968911 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:31.968911 master-0 kubenswrapper[7721]: healthz check failed
Feb 16
02:20:31.969958 master-0 kubenswrapper[7721]: I0216 02:20:31.968959 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:32.658595 master-0 kubenswrapper[7721]: I0216 02:20:32.658506 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/4.log" Feb 16 02:20:32.969010 master-0 kubenswrapper[7721]: I0216 02:20:32.968942 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:32.969010 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:32.969010 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:32.969010 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:32.969591 master-0 kubenswrapper[7721]: I0216 02:20:32.969035 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:32.978827 master-0 kubenswrapper[7721]: I0216 02:20:32.978710 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 02:20:32.978989 master-0 kubenswrapper[7721]: I0216 02:20:32.978911 7721 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 02:20:33.969266 master-0 kubenswrapper[7721]: I0216 02:20:33.969188 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:33.969266 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:33.969266 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:33.969266 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:33.970242 master-0 kubenswrapper[7721]: I0216 02:20:33.969272 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:34.393782 master-0 kubenswrapper[7721]: I0216 02:20:34.393594 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 02:20:34.393782 master-0 kubenswrapper[7721]: I0216 02:20:34.393707 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection 
refused" Feb 16 02:20:34.968772 master-0 kubenswrapper[7721]: I0216 02:20:34.968658 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:34.968772 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:34.968772 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:34.968772 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:34.970186 master-0 kubenswrapper[7721]: I0216 02:20:34.968782 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:35.969787 master-0 kubenswrapper[7721]: I0216 02:20:35.969662 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:35.969787 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:35.969787 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:35.969787 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:35.970953 master-0 kubenswrapper[7721]: I0216 02:20:35.969809 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:35.978489 master-0 kubenswrapper[7721]: I0216 02:20:35.978397 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 02:20:35.978638 master-0 kubenswrapper[7721]: I0216 02:20:35.978485 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 02:20:36.925102 master-0 kubenswrapper[7721]: E0216 02:20:36.925002 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 02:20:36.969205 master-0 kubenswrapper[7721]: I0216 02:20:36.969107 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:36.969205 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:36.969205 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:36.969205 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:36.969958 master-0 kubenswrapper[7721]: I0216 02:20:36.969904 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:37.395643 master-0 kubenswrapper[7721]: I0216 02:20:37.393862 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: 
Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 02:20:37.395643 master-0 kubenswrapper[7721]: I0216 02:20:37.393928 7721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 02:20:37.395643 master-0 kubenswrapper[7721]: I0216 02:20:37.393981 7721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:20:37.395643 master-0 kubenswrapper[7721]: I0216 02:20:37.394681 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf"} pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 16 02:20:37.395643 master-0 kubenswrapper[7721]: I0216 02:20:37.394724 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" containerID="cri-o://7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf" gracePeriod=30 Feb 16 02:20:37.395643 master-0 kubenswrapper[7721]: I0216 02:20:37.395333 7721 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-zlbd2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 
10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 02:20:37.395643 master-0 kubenswrapper[7721]: I0216 02:20:37.395363 7721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 02:20:37.705378 master-0 kubenswrapper[7721]: I0216 02:20:37.705283 7721 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06" Feb 16 02:20:37.705378 master-0 kubenswrapper[7721]: I0216 02:20:37.705338 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="75e1522d-2c8c-4dbc-830d-47636881cc06" Feb 16 02:20:37.741200 master-0 kubenswrapper[7721]: E0216 02:20:37.741140 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-zlbd2_openshift-config-operator(9be9fd24-fdb1-43dc-80b8-68020427bfd7)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" Feb 16 02:20:37.968696 master-0 kubenswrapper[7721]: I0216 02:20:37.968491 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:37.968696 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:37.968696 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:37.968696 master-0 kubenswrapper[7721]: healthz check failed Feb 16 
02:20:37.968696 master-0 kubenswrapper[7721]: I0216 02:20:37.968564 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:38.715953 master-0 kubenswrapper[7721]: I0216 02:20:38.715888 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-zlbd2_9be9fd24-fdb1-43dc-80b8-68020427bfd7/openshift-config-operator/3.log" Feb 16 02:20:38.717085 master-0 kubenswrapper[7721]: I0216 02:20:38.717021 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-zlbd2_9be9fd24-fdb1-43dc-80b8-68020427bfd7/openshift-config-operator/2.log" Feb 16 02:20:38.717663 master-0 kubenswrapper[7721]: I0216 02:20:38.717610 7721 generic.go:334] "Generic (PLEG): container finished" podID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerID="7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf" exitCode=255 Feb 16 02:20:38.717790 master-0 kubenswrapper[7721]: I0216 02:20:38.717675 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerDied","Data":"7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf"} Feb 16 02:20:38.717790 master-0 kubenswrapper[7721]: I0216 02:20:38.717732 7721 scope.go:117] "RemoveContainer" containerID="917b2632e8a4e3e770f86449d7b3ef73654f05bfc0dedeadd349ab3f3148bd85" Feb 16 02:20:38.718430 master-0 kubenswrapper[7721]: I0216 02:20:38.718383 7721 scope.go:117] "RemoveContainer" containerID="7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf" Feb 16 02:20:38.718771 master-0 kubenswrapper[7721]: E0216 02:20:38.718725 7721 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-zlbd2_openshift-config-operator(9be9fd24-fdb1-43dc-80b8-68020427bfd7)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" Feb 16 02:20:38.725950 master-0 kubenswrapper[7721]: I0216 02:20:38.725885 7721 scope.go:117] "RemoveContainer" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93" Feb 16 02:20:38.726624 master-0 kubenswrapper[7721]: E0216 02:20:38.726427 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa" Feb 16 02:20:38.969091 master-0 kubenswrapper[7721]: I0216 02:20:38.968809 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:38.969091 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:38.969091 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:38.969091 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:38.969091 master-0 kubenswrapper[7721]: I0216 02:20:38.968920 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 
02:20:39.731011 master-0 kubenswrapper[7721]: I0216 02:20:39.730880 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-zlbd2_9be9fd24-fdb1-43dc-80b8-68020427bfd7/openshift-config-operator/3.log" Feb 16 02:20:39.969055 master-0 kubenswrapper[7721]: I0216 02:20:39.968976 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:39.969055 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:39.969055 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:39.969055 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:39.969055 master-0 kubenswrapper[7721]: I0216 02:20:39.969048 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:40.797393 master-0 kubenswrapper[7721]: I0216 02:20:40.797282 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:20:40.798403 master-0 kubenswrapper[7721]: I0216 02:20:40.797708 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:20:40.804268 master-0 kubenswrapper[7721]: I0216 02:20:40.804205 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:20:40.970054 master-0 kubenswrapper[7721]: I0216 02:20:40.969753 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:40.970054 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:40.970054 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:40.970054 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:40.970744 master-0 kubenswrapper[7721]: I0216 02:20:40.970075 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:41.755495 master-0 kubenswrapper[7721]: I0216 02:20:41.755370 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:20:41.970278 master-0 kubenswrapper[7721]: I0216 02:20:41.970186 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:41.970278 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:41.970278 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:41.970278 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:41.970278 master-0 kubenswrapper[7721]: I0216 02:20:41.970284 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:42.725278 master-0 kubenswrapper[7721]: I0216 02:20:42.725150 7721 scope.go:117] "RemoveContainer" 
containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e" Feb 16 02:20:42.725786 master-0 kubenswrapper[7721]: E0216 02:20:42.725581 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908" Feb 16 02:20:42.969262 master-0 kubenswrapper[7721]: I0216 02:20:42.969139 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:42.969262 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:42.969262 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:42.969262 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:42.969262 master-0 kubenswrapper[7721]: I0216 02:20:42.969256 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:43.973814 master-0 kubenswrapper[7721]: I0216 02:20:43.969271 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:43.973814 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:43.973814 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 
02:20:43.973814 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:43.973814 master-0 kubenswrapper[7721]: I0216 02:20:43.969394 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:44.968992 master-0 kubenswrapper[7721]: I0216 02:20:44.968899 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:44.968992 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:44.968992 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:44.968992 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:44.972876 master-0 kubenswrapper[7721]: I0216 02:20:44.969003 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:45.969175 master-0 kubenswrapper[7721]: I0216 02:20:45.969080 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:45.969175 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:45.969175 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:45.969175 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:45.969175 master-0 kubenswrapper[7721]: I0216 02:20:45.969145 7721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:46.970768 master-0 kubenswrapper[7721]: I0216 02:20:46.970533 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:46.970768 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:46.970768 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:46.970768 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:46.970768 master-0 kubenswrapper[7721]: I0216 02:20:46.970640 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:47.968900 master-0 kubenswrapper[7721]: I0216 02:20:47.968779 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:20:47.968900 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:20:47.968900 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:20:47.968900 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:20:47.969350 master-0 kubenswrapper[7721]: I0216 02:20:47.968929 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:20:48.968383 
master-0 kubenswrapper[7721]: I0216 02:20:48.968238 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:48.968383 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:48.968383 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:48.968383 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:48.968383 master-0 kubenswrapper[7721]: I0216 02:20:48.968338 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:49.799572 master-0 kubenswrapper[7721]: I0216 02:20:49.799424 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng"
Feb 16 02:20:49.967653 master-0 kubenswrapper[7721]: I0216 02:20:49.967604 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:49.967653 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:49.967653 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:49.967653 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:49.967965 master-0 kubenswrapper[7721]: I0216 02:20:49.967663 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:50.969608 master-0 kubenswrapper[7721]: I0216 02:20:50.969182 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:50.969608 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:50.969608 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:50.969608 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:50.969608 master-0 kubenswrapper[7721]: I0216 02:20:50.969317 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:51.725069 master-0 kubenswrapper[7721]: I0216 02:20:51.724966 7721 scope.go:117] "RemoveContainer" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93"
Feb 16 02:20:51.725533 master-0 kubenswrapper[7721]: E0216 02:20:51.725484 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:20:51.969600 master-0 kubenswrapper[7721]: I0216 02:20:51.969521 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:51.969600 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:51.969600 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:51.969600 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:51.970971 master-0 kubenswrapper[7721]: I0216 02:20:51.970693 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:52.725622 master-0 kubenswrapper[7721]: I0216 02:20:52.725543 7721 scope.go:117] "RemoveContainer" containerID="7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf"
Feb 16 02:20:52.725911 master-0 kubenswrapper[7721]: E0216 02:20:52.725883 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-zlbd2_openshift-config-operator(9be9fd24-fdb1-43dc-80b8-68020427bfd7)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" podUID="9be9fd24-fdb1-43dc-80b8-68020427bfd7"
Feb 16 02:20:52.968903 master-0 kubenswrapper[7721]: I0216 02:20:52.968787 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:52.968903 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:52.968903 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:52.968903 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:52.968903 master-0 kubenswrapper[7721]: I0216 02:20:52.968884 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:53.983594 master-0 kubenswrapper[7721]: I0216 02:20:53.972881 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:20:53.983594 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:20:53.983594 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:20:53.983594 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:20:53.983594 master-0 kubenswrapper[7721]: I0216 02:20:53.972934 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:20:53.983594 master-0 kubenswrapper[7721]: I0216 02:20:53.972969 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:20:53.983594 master-0 kubenswrapper[7721]: I0216 02:20:53.973488 7721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"6ef739f702cc8cdaf44b732cdf8fab6363588dea12a3413585529df5415a8dd4"} pod="openshift-ingress/router-default-864ddd5f56-ffptx" containerMessage="Container router failed startup probe, will be restarted"
Feb 16 02:20:53.983594 master-0 kubenswrapper[7721]: I0216 02:20:53.973516 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" containerID="cri-o://6ef739f702cc8cdaf44b732cdf8fab6363588dea12a3413585529df5415a8dd4" gracePeriod=3600
Feb 16 02:20:55.725600 master-0 kubenswrapper[7721]: I0216 02:20:55.725524 7721 scope.go:117] "RemoveContainer" containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e"
Feb 16 02:20:55.726504 master-0 kubenswrapper[7721]: E0216 02:20:55.725953 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908"
Feb 16 02:21:04.724827 master-0 kubenswrapper[7721]: I0216 02:21:04.724773 7721 scope.go:117] "RemoveContainer" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93"
Feb 16 02:21:04.725803 master-0 kubenswrapper[7721]: E0216 02:21:04.725007 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-nbjz6_openshift-ingress-operator(04804a08-e3a5-46f3-abcb-967866834baa)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" podUID="04804a08-e3a5-46f3-abcb-967866834baa"
Feb 16 02:21:04.726292 master-0 kubenswrapper[7721]: I0216 02:21:04.726224 7721 scope.go:117] "RemoveContainer" containerID="7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf"
Feb 16 02:21:05.777664 master-0 kubenswrapper[7721]: I0216 02:21:05.777581 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 16 02:21:05.778177 master-0 kubenswrapper[7721]: E0216 02:21:05.778008 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" containerName="installer"
Feb 16 02:21:05.778177 master-0 kubenswrapper[7721]: I0216 02:21:05.778037 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" containerName="installer"
Feb 16 02:21:05.778177 master-0 kubenswrapper[7721]: E0216 02:21:05.778120 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9063971f-d258-4c4b-9e12-06b7de390d3b" containerName="installer"
Feb 16 02:21:05.778177 master-0 kubenswrapper[7721]: I0216 02:21:05.778138 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9063971f-d258-4c4b-9e12-06b7de390d3b" containerName="installer"
Feb 16 02:21:05.778515 master-0 kubenswrapper[7721]: I0216 02:21:05.778476 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9063971f-d258-4c4b-9e12-06b7de390d3b" containerName="installer"
Feb 16 02:21:05.778588 master-0 kubenswrapper[7721]: I0216 02:21:05.778535 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" containerName="installer"
Feb 16 02:21:05.779331 master-0 kubenswrapper[7721]: I0216 02:21:05.779291 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:05.781341 master-0 kubenswrapper[7721]: I0216 02:21:05.781270 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-wnpkt"
Feb 16 02:21:05.783399 master-0 kubenswrapper[7721]: I0216 02:21:05.783351 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 02:21:05.788828 master-0 kubenswrapper[7721]: I0216 02:21:05.788718 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 16 02:21:05.789521 master-0 kubenswrapper[7721]: I0216 02:21:05.789497 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.794629 master-0 kubenswrapper[7721]: I0216 02:21:05.794572 7721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-bjhh5"
Feb 16 02:21:05.794802 master-0 kubenswrapper[7721]: I0216 02:21:05.794708 7721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 16 02:21:05.808912 master-0 kubenswrapper[7721]: I0216 02:21:05.808870 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 16 02:21:05.873216 master-0 kubenswrapper[7721]: I0216 02:21:05.873150 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 16 02:21:05.886491 master-0 kubenswrapper[7721]: I0216 02:21:05.886428 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-var-lock\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:05.886707 master-0 kubenswrapper[7721]: I0216 02:21:05.886503 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:05.886707 master-0 kubenswrapper[7721]: I0216 02:21:05.886542 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.886707 master-0 kubenswrapper[7721]: I0216 02:21:05.886563 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c399bab-ff5e-4fd0-959b-354508c39eec-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.886707 master-0 kubenswrapper[7721]: I0216 02:21:05.886587 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-var-lock\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.886707 master-0 kubenswrapper[7721]: I0216 02:21:05.886634 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:05.942549 master-0 kubenswrapper[7721]: I0216 02:21:05.942502 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-zlbd2_9be9fd24-fdb1-43dc-80b8-68020427bfd7/openshift-config-operator/3.log"
Feb 16 02:21:05.943546 master-0 kubenswrapper[7721]: I0216 02:21:05.943497 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" event={"ID":"9be9fd24-fdb1-43dc-80b8-68020427bfd7","Type":"ContainerStarted","Data":"d5340edfc712ed1b46fa67df7be743b58b3e546895b86b4ea9c8db071bb7af63"}
Feb 16 02:21:05.943990 master-0 kubenswrapper[7721]: I0216 02:21:05.943937 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:21:05.988526 master-0 kubenswrapper[7721]: I0216 02:21:05.988468 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-var-lock\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:05.988982 master-0 kubenswrapper[7721]: I0216 02:21:05.988610 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-var-lock\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:05.988982 master-0 kubenswrapper[7721]: I0216 02:21:05.988939 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:05.989772 master-0 kubenswrapper[7721]: I0216 02:21:05.989690 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.989911 master-0 kubenswrapper[7721]: I0216 02:21:05.989816 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.989988 master-0 kubenswrapper[7721]: I0216 02:21:05.989909 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c399bab-ff5e-4fd0-959b-354508c39eec-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.990480 master-0 kubenswrapper[7721]: I0216 02:21:05.990415 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-var-lock\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.990647 master-0 kubenswrapper[7721]: I0216 02:21:05.990509 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-var-lock\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:05.990816 master-0 kubenswrapper[7721]: I0216 02:21:05.990771 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:05.990895 master-0 kubenswrapper[7721]: I0216 02:21:05.990857 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:06.011068 master-0 kubenswrapper[7721]: I0216 02:21:06.010995 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:06.019069 master-0 kubenswrapper[7721]: I0216 02:21:06.019024 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c399bab-ff5e-4fd0-959b-354508c39eec-kube-api-access\") pod \"installer-5-master-0\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:06.152772 master-0 kubenswrapper[7721]: I0216 02:21:06.152558 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 16 02:21:06.179173 master-0 kubenswrapper[7721]: I0216 02:21:06.178091 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 02:21:06.669329 master-0 kubenswrapper[7721]: I0216 02:21:06.669246 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 16 02:21:06.678327 master-0 kubenswrapper[7721]: W0216 02:21:06.678220 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbbea4f60_d0a6_402d_9cff_1df19947e3e4.slice/crio-96be8f863582a9bccfabc0c715a30e101e0cc04fb2e4cf5d4839821d7da13fd5 WatchSource:0}: Error finding container 96be8f863582a9bccfabc0c715a30e101e0cc04fb2e4cf5d4839821d7da13fd5: Status 404 returned error can't find the container with id 96be8f863582a9bccfabc0c715a30e101e0cc04fb2e4cf5d4839821d7da13fd5
Feb 16 02:21:06.746502 master-0 kubenswrapper[7721]: I0216 02:21:06.744132 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 16 02:21:06.746502 master-0 kubenswrapper[7721]: W0216 02:21:06.745493 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c399bab_ff5e_4fd0_959b_354508c39eec.slice/crio-5001e844bddc55d075219ca34aad85fbea6d50d9661dd853374d08556f873a41 WatchSource:0}: Error finding container 5001e844bddc55d075219ca34aad85fbea6d50d9661dd853374d08556f873a41: Status 404 returned error can't find the container with id 5001e844bddc55d075219ca34aad85fbea6d50d9661dd853374d08556f873a41
Feb 16 02:21:06.954241 master-0 kubenswrapper[7721]: I0216 02:21:06.954162 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1c399bab-ff5e-4fd0-959b-354508c39eec","Type":"ContainerStarted","Data":"5001e844bddc55d075219ca34aad85fbea6d50d9661dd853374d08556f873a41"}
Feb 16 02:21:06.955850 master-0 kubenswrapper[7721]: I0216 02:21:06.955765 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"bbea4f60-d0a6-402d-9cff-1df19947e3e4","Type":"ContainerStarted","Data":"96be8f863582a9bccfabc0c715a30e101e0cc04fb2e4cf5d4839821d7da13fd5"}
Feb 16 02:21:07.725480 master-0 kubenswrapper[7721]: I0216 02:21:07.725384 7721 scope.go:117] "RemoveContainer" containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e"
Feb 16 02:21:07.725883 master-0 kubenswrapper[7721]: E0216 02:21:07.725828 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908"
Feb 16 02:21:07.977011 master-0 kubenswrapper[7721]: I0216 02:21:07.976827 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1c399bab-ff5e-4fd0-959b-354508c39eec","Type":"ContainerStarted","Data":"6cbcdbcbd020aa118f6d5315c540820d270da381e100f19561c88fefb12f18a7"}
Feb 16 02:21:07.982807 master-0 kubenswrapper[7721]: I0216 02:21:07.981825 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"bbea4f60-d0a6-402d-9cff-1df19947e3e4","Type":"ContainerStarted","Data":"bb657cd74be97a4b24393350381c44eacba835edbee1f8e3b9ef27c11f752f2e"}
Feb 16 02:21:08.004737 master-0 kubenswrapper[7721]: I0216 02:21:08.004614 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=3.004588946 podStartE2EDuration="3.004588946s" podCreationTimestamp="2026-02-16 02:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:21:08.00361665 +0000 UTC m=+871.497850952" watchObservedRunningTime="2026-02-16 02:21:08.004588946 +0000 UTC m=+871.498823248"
Feb 16 02:21:08.984616 master-0 kubenswrapper[7721]: I0216 02:21:08.984558 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:21:09.021462 master-0 kubenswrapper[7721]: I0216 02:21:09.021307 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=4.021274208 podStartE2EDuration="4.021274208s" podCreationTimestamp="2026-02-16 02:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:21:08.042133988 +0000 UTC m=+871.536368290" watchObservedRunningTime="2026-02-16 02:21:09.021274208 +0000 UTC m=+872.515508510"
Feb 16 02:21:10.621107 master-0 kubenswrapper[7721]: I0216 02:21:10.620992 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 16 02:21:10.622154 master-0 kubenswrapper[7721]: I0216 02:21:10.621277 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="bbea4f60-d0a6-402d-9cff-1df19947e3e4" containerName="installer" containerID="cri-o://bb657cd74be97a4b24393350381c44eacba835edbee1f8e3b9ef27c11f752f2e" gracePeriod=30
Feb 16 02:21:13.823357 master-0 kubenswrapper[7721]: I0216 02:21:13.823270 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 16 02:21:13.824594 master-0 kubenswrapper[7721]: I0216 02:21:13.824548 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:13.849383 master-0 kubenswrapper[7721]: I0216 02:21:13.849299 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 16 02:21:13.940350 master-0 kubenswrapper[7721]: I0216 02:21:13.940111 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:13.940640 master-0 kubenswrapper[7721]: I0216 02:21:13.940578 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:13.940744 master-0 kubenswrapper[7721]: I0216 02:21:13.940708 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:14.042380 master-0 kubenswrapper[7721]: I0216 02:21:14.042147 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:14.042380 master-0 kubenswrapper[7721]: I0216 02:21:14.042237 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:14.042380 master-0 kubenswrapper[7721]: I0216 02:21:14.042307 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:14.042834 master-0 kubenswrapper[7721]: I0216 02:21:14.042486 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:14.042834 master-0 kubenswrapper[7721]: I0216 02:21:14.042741 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:14.073190 master-0 kubenswrapper[7721]: I0216 02:21:14.072559 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:14.199820 master-0 kubenswrapper[7721]: I0216 02:21:14.199744 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:21:14.722748 master-0 kubenswrapper[7721]: I0216 02:21:14.722656 7721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 16 02:21:15.057169 master-0 kubenswrapper[7721]: I0216 02:21:15.057066 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e","Type":"ContainerStarted","Data":"519d12e7d67d992627ab3afec4b63569e16dcc4c57e6118793f1a36ff0f10027"}
Feb 16 02:21:16.085558 master-0 kubenswrapper[7721]: I0216 02:21:16.085481 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e","Type":"ContainerStarted","Data":"573f599caba2d6aea83d83677716e638746c9026d70482beb5f92bc432117189"}
Feb 16 02:21:16.111812 master-0 kubenswrapper[7721]: I0216 02:21:16.111674 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=3.111655294 podStartE2EDuration="3.111655294s" podCreationTimestamp="2026-02-16 02:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:21:16.107843493 +0000 UTC m=+879.602077755" watchObservedRunningTime="2026-02-16 02:21:16.111655294 +0000 UTC m=+879.605889556"
Feb 16 02:21:19.724967 master-0 kubenswrapper[7721]: I0216 02:21:19.724855 7721 scope.go:117] "RemoveContainer" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93"
Feb 16 02:21:20.126746 master-0 kubenswrapper[7721]: I0216 02:21:20.126225 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/4.log"
Feb 16 02:21:20.127805 master-0 kubenswrapper[7721]: I0216 02:21:20.127317 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" event={"ID":"04804a08-e3a5-46f3-abcb-967866834baa","Type":"ContainerStarted","Data":"3329f9189665a8c61397d4c024786c6b8ffec89003c5c45fd95ecbe7afca2be8"}
Feb 16 02:21:22.725754 master-0 kubenswrapper[7721]: I0216 02:21:22.725673 7721 scope.go:117] "RemoveContainer" containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e"
Feb 16 02:21:22.726729 master-0 kubenswrapper[7721]: E0216 02:21:22.726114 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908"
Feb 16 02:21:32.234474 master-0 kubenswrapper[7721]: I0216 02:21:32.234369 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log"
Feb 16 02:21:32.235953 master-0 kubenswrapper[7721]: I0216 02:21:32.235903 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log"
Feb 16 02:21:32.240688 master-0 kubenswrapper[7721]: I0216 02:21:32.240639 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log"
Feb 16 02:21:32.240812 master-0 kubenswrapper[7721]: I0216 02:21:32.240713 7721 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="c1607c7a684a009a85d360c6358aedc027d89ca14606abafaf65b0d9cbaca7c9" exitCode=1
Feb 16 02:21:32.240812 master-0 kubenswrapper[7721]: I0216 02:21:32.240766 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"c1607c7a684a009a85d360c6358aedc027d89ca14606abafaf65b0d9cbaca7c9"}
Feb 16 02:21:32.241735 master-0 kubenswrapper[7721]: I0216 02:21:32.241687 7721 scope.go:117] "RemoveContainer" containerID="c1607c7a684a009a85d360c6358aedc027d89ca14606abafaf65b0d9cbaca7c9"
Feb 16 02:21:33.252551 master-0 kubenswrapper[7721]: I0216 02:21:33.252459 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log"
Feb 16 02:21:33.253769 master-0 kubenswrapper[7721]: I0216 02:21:33.253719 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log"
Feb 16 02:21:33.254905 master-0 kubenswrapper[7721]: I0216 02:21:33.254859 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log"
Feb 16 02:21:33.255045 master-0 kubenswrapper[7721]: I0216 02:21:33.254919 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"ab9f31d8a9dea7f17fe5df1556062b9ee37acd8a1e22d617b3329084d777dce1"}
Feb 16 02:21:37.775691 master-0 kubenswrapper[7721]: I0216 02:21:37.775518 7721 scope.go:117] "RemoveContainer" containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e"
Feb 16 02:21:37.776368 master-0 kubenswrapper[7721]: E0216 02:21:37.775730 7721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-466x9_openshift-cluster-storage-operator(a3065737-c7c0-4fbb-b484-f2a9204d4908)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" podUID="a3065737-c7c0-4fbb-b484-f2a9204d4908"
Feb 16 02:21:39.159790 master-0 kubenswrapper[7721]: I0216 02:21:39.159701 7721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 16 02:21:39.160629 master-0 kubenswrapper[7721]: I0216 02:21:39.160089 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" containerID="cri-o://6bc4b5ee1e89ed7a76ec9068e6cdb19289d70c03bd852b3dc8e93c9d7f9e1ba4" gracePeriod=30
Feb 16 02:21:39.161825 master-0 kubenswrapper[7721]: I0216 02:21:39.161753 7721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 16 02:21:39.162558 master-0 kubenswrapper[7721]: E0216 02:21:39.162228 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.162648 master-0 kubenswrapper[7721]: I0216 02:21:39.162586 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.162648 master-0 kubenswrapper[7721]: E0216 02:21:39.162644 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.162769 master-0 kubenswrapper[7721]: I0216 02:21:39.162658 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.162769 master-0 kubenswrapper[7721]: E0216 02:21:39.162690 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.162769 master-0 kubenswrapper[7721]: I0216 02:21:39.162703 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.162966 master-0 kubenswrapper[7721]: I0216 02:21:39.162951 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.163042 master-0 kubenswrapper[7721]: I0216 02:21:39.162976 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.163461 master-0 kubenswrapper[7721]: I0216 02:21:39.163389 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:21:39.165477 master-0 kubenswrapper[7721]: I0216 02:21:39.165406 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:39.296255 master-0 kubenswrapper[7721]: I0216 02:21:39.296197 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:39.296473 master-0 kubenswrapper[7721]: I0216 02:21:39.296274 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:39.312238 master-0 kubenswrapper[7721]: I0216 02:21:39.312181 7721 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="6bc4b5ee1e89ed7a76ec9068e6cdb19289d70c03bd852b3dc8e93c9d7f9e1ba4" exitCode=0 Feb 16 02:21:39.312381 master-0 kubenswrapper[7721]: I0216 02:21:39.312265 7721 scope.go:117] "RemoveContainer" containerID="1bad524fd514e3639a6a8b060873c8398b9f534aa2528726df9aa2897827465b" Feb 16 02:21:39.314699 master-0 kubenswrapper[7721]: I0216 02:21:39.314650 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_bbea4f60-d0a6-402d-9cff-1df19947e3e4/installer/0.log" Feb 16 02:21:39.314827 master-0 kubenswrapper[7721]: I0216 02:21:39.314706 7721 generic.go:334] "Generic (PLEG): container finished" podID="bbea4f60-d0a6-402d-9cff-1df19947e3e4" containerID="bb657cd74be97a4b24393350381c44eacba835edbee1f8e3b9ef27c11f752f2e" exitCode=1 Feb 16 02:21:39.314827 master-0 kubenswrapper[7721]: I0216 02:21:39.314737 7721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"bbea4f60-d0a6-402d-9cff-1df19947e3e4","Type":"ContainerDied","Data":"bb657cd74be97a4b24393350381c44eacba835edbee1f8e3b9ef27c11f752f2e"} Feb 16 02:21:39.392921 master-0 kubenswrapper[7721]: I0216 02:21:39.392827 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 02:21:39.400782 master-0 kubenswrapper[7721]: I0216 02:21:39.398335 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:39.400782 master-0 kubenswrapper[7721]: I0216 02:21:39.398538 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:39.400782 master-0 kubenswrapper[7721]: I0216 02:21:39.398668 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:39.402732 master-0 kubenswrapper[7721]: I0216 02:21:39.401644 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: 
\"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:39.451930 master-0 kubenswrapper[7721]: I0216 02:21:39.451850 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:21:39.457722 master-0 kubenswrapper[7721]: I0216 02:21:39.457668 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_bbea4f60-d0a6-402d-9cff-1df19947e3e4/installer/0.log" Feb 16 02:21:39.457884 master-0 kubenswrapper[7721]: I0216 02:21:39.457742 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 02:21:39.479028 master-0 kubenswrapper[7721]: I0216 02:21:39.478932 7721 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="9c06fa42-1420-4b69-9bf4-8a782a5e3fd2" Feb 16 02:21:39.603046 master-0 kubenswrapper[7721]: I0216 02:21:39.602859 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"9460ca0802075a8a6a10d7b3e6052c4d\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " Feb 16 02:21:39.603046 master-0 kubenswrapper[7721]: I0216 02:21:39.602956 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-var-lock\") pod \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " Feb 16 02:21:39.603046 master-0 kubenswrapper[7721]: I0216 02:21:39.602990 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kubelet-dir\") pod 
\"bbea4f60-d0a6-402d-9cff-1df19947e3e4\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " Feb 16 02:21:39.603046 master-0 kubenswrapper[7721]: I0216 02:21:39.603037 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"9460ca0802075a8a6a10d7b3e6052c4d\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " Feb 16 02:21:39.603681 master-0 kubenswrapper[7721]: I0216 02:21:39.603017 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs" (OuterVolumeSpecName: "logs") pod "9460ca0802075a8a6a10d7b3e6052c4d" (UID: "9460ca0802075a8a6a10d7b3e6052c4d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:21:39.603681 master-0 kubenswrapper[7721]: I0216 02:21:39.603095 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kube-api-access\") pod \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\" (UID: \"bbea4f60-d0a6-402d-9cff-1df19947e3e4\") " Feb 16 02:21:39.603681 master-0 kubenswrapper[7721]: I0216 02:21:39.603079 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets" (OuterVolumeSpecName: "secrets") pod "9460ca0802075a8a6a10d7b3e6052c4d" (UID: "9460ca0802075a8a6a10d7b3e6052c4d"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:21:39.603681 master-0 kubenswrapper[7721]: I0216 02:21:39.603088 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-var-lock" (OuterVolumeSpecName: "var-lock") pod "bbea4f60-d0a6-402d-9cff-1df19947e3e4" (UID: "bbea4f60-d0a6-402d-9cff-1df19947e3e4"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:21:39.603681 master-0 kubenswrapper[7721]: I0216 02:21:39.603136 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bbea4f60-d0a6-402d-9cff-1df19947e3e4" (UID: "bbea4f60-d0a6-402d-9cff-1df19947e3e4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:21:39.604054 master-0 kubenswrapper[7721]: I0216 02:21:39.603691 7721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:21:39.604054 master-0 kubenswrapper[7721]: I0216 02:21:39.603714 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:21:39.604054 master-0 kubenswrapper[7721]: I0216 02:21:39.603727 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:21:39.604054 master-0 kubenswrapper[7721]: I0216 02:21:39.603742 7721 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") on node \"master-0\" DevicePath \"\"" Feb 16 02:21:39.608003 master-0 kubenswrapper[7721]: I0216 02:21:39.607942 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bbea4f60-d0a6-402d-9cff-1df19947e3e4" (UID: "bbea4f60-d0a6-402d-9cff-1df19947e3e4"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:21:39.683769 master-0 kubenswrapper[7721]: I0216 02:21:39.683706 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:39.704896 master-0 kubenswrapper[7721]: I0216 02:21:39.704825 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbea4f60-d0a6-402d-9cff-1df19947e3e4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:21:39.715032 master-0 kubenswrapper[7721]: W0216 02:21:39.714963 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-92790def01d9723678e72ddb37afa203e6ce284de27eb1ef78e5d202635e3d9e WatchSource:0}: Error finding container 92790def01d9723678e72ddb37afa203e6ce284de27eb1ef78e5d202635e3d9e: Status 404 returned error can't find the container with id 92790def01d9723678e72ddb37afa203e6ce284de27eb1ef78e5d202635e3d9e Feb 16 02:21:40.326367 master-0 kubenswrapper[7721]: I0216 02:21:40.326287 7721 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="76eaee0713adf3d6273ac37acfe2aa28acfb59f88749a49fa74c637faf8ccbb7" exitCode=0 Feb 16 02:21:40.327238 master-0 kubenswrapper[7721]: I0216 02:21:40.326368 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerDied","Data":"76eaee0713adf3d6273ac37acfe2aa28acfb59f88749a49fa74c637faf8ccbb7"} Feb 16 02:21:40.327238 master-0 kubenswrapper[7721]: I0216 02:21:40.326465 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"92790def01d9723678e72ddb37afa203e6ce284de27eb1ef78e5d202635e3d9e"} Feb 16 02:21:40.330946 master-0 kubenswrapper[7721]: I0216 02:21:40.330892 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_bbea4f60-d0a6-402d-9cff-1df19947e3e4/installer/0.log" Feb 16 02:21:40.331078 master-0 kubenswrapper[7721]: I0216 02:21:40.331022 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"bbea4f60-d0a6-402d-9cff-1df19947e3e4","Type":"ContainerDied","Data":"96be8f863582a9bccfabc0c715a30e101e0cc04fb2e4cf5d4839821d7da13fd5"} Feb 16 02:21:40.331157 master-0 kubenswrapper[7721]: I0216 02:21:40.331084 7721 scope.go:117] "RemoveContainer" containerID="bb657cd74be97a4b24393350381c44eacba835edbee1f8e3b9ef27c11f752f2e" Feb 16 02:21:40.331318 master-0 kubenswrapper[7721]: I0216 02:21:40.331196 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 02:21:40.339969 master-0 kubenswrapper[7721]: I0216 02:21:40.339910 7721 generic.go:334] "Generic (PLEG): container finished" podID="17390d9a-148d-4927-a831-5bc4873c43d5" containerID="6ef739f702cc8cdaf44b732cdf8fab6363588dea12a3413585529df5415a8dd4" exitCode=0 Feb 16 02:21:40.339969 master-0 kubenswrapper[7721]: I0216 02:21:40.339951 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerDied","Data":"6ef739f702cc8cdaf44b732cdf8fab6363588dea12a3413585529df5415a8dd4"} Feb 16 02:21:40.344146 master-0 kubenswrapper[7721]: I0216 02:21:40.344100 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27" Feb 16 02:21:40.344271 master-0 kubenswrapper[7721]: I0216 02:21:40.344139 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 02:21:40.347589 master-0 kubenswrapper[7721]: I0216 02:21:40.347538 7721 generic.go:334] "Generic (PLEG): container finished" podID="1c399bab-ff5e-4fd0-959b-354508c39eec" containerID="6cbcdbcbd020aa118f6d5315c540820d270da381e100f19561c88fefb12f18a7" exitCode=0 Feb 16 02:21:40.347725 master-0 kubenswrapper[7721]: I0216 02:21:40.347602 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1c399bab-ff5e-4fd0-959b-354508c39eec","Type":"ContainerDied","Data":"6cbcdbcbd020aa118f6d5315c540820d270da381e100f19561c88fefb12f18a7"} Feb 16 02:21:40.401149 master-0 kubenswrapper[7721]: I0216 02:21:40.401092 7721 scope.go:117] "RemoveContainer" containerID="3ce3bc4dbf9d2b84eb229ace41c4b8a84419c825acb6db3b3ccaf2d00311773f" Feb 16 02:21:40.424347 master-0 kubenswrapper[7721]: I0216 02:21:40.422891 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 02:21:40.431009 master-0 kubenswrapper[7721]: I0216 02:21:40.430935 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 02:21:40.739726 master-0 kubenswrapper[7721]: I0216 02:21:40.739643 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9460ca0802075a8a6a10d7b3e6052c4d" path="/var/lib/kubelet/pods/9460ca0802075a8a6a10d7b3e6052c4d/volumes" Feb 16 02:21:40.740780 master-0 kubenswrapper[7721]: I0216 02:21:40.740725 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbea4f60-d0a6-402d-9cff-1df19947e3e4" path="/var/lib/kubelet/pods/bbea4f60-d0a6-402d-9cff-1df19947e3e4/volumes" Feb 16 02:21:40.741326 master-0 kubenswrapper[7721]: I0216 02:21:40.741257 7721 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 02:21:40.759851 master-0 kubenswrapper[7721]: I0216 
02:21:40.759802 7721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 02:21:40.759851 master-0 kubenswrapper[7721]: I0216 02:21:40.759839 7721 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="9c06fa42-1420-4b69-9bf4-8a782a5e3fd2" Feb 16 02:21:40.765117 master-0 kubenswrapper[7721]: I0216 02:21:40.765076 7721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 02:21:40.765117 master-0 kubenswrapper[7721]: I0216 02:21:40.765103 7721 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="9c06fa42-1420-4b69-9bf4-8a782a5e3fd2" Feb 16 02:21:41.358334 master-0 kubenswrapper[7721]: I0216 02:21:41.358261 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-ffptx" event={"ID":"17390d9a-148d-4927-a831-5bc4873c43d5","Type":"ContainerStarted","Data":"44f96ad23d55ad9e246235dc60ba2235ad3ea0f642a102bbde1c5e9f778554da"} Feb 16 02:21:41.377017 master-0 kubenswrapper[7721]: I0216 02:21:41.366030 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"b1719f823da9232e4df52b63f4eabde4790d28c377a06111c61f6b0b4e7b4fdc"} Feb 16 02:21:41.377017 master-0 kubenswrapper[7721]: I0216 02:21:41.366090 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"779e04f06bfe5ff236c55c41264d4f056bfa7e1b617ccd26b61c5cc8d3f21521"} Feb 16 02:21:41.766997 master-0 kubenswrapper[7721]: I0216 02:21:41.766889 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 02:21:41.945176 master-0 kubenswrapper[7721]: I0216 02:21:41.944846 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c399bab-ff5e-4fd0-959b-354508c39eec-kube-api-access\") pod \"1c399bab-ff5e-4fd0-959b-354508c39eec\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " Feb 16 02:21:41.945176 master-0 kubenswrapper[7721]: I0216 02:21:41.944980 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-var-lock\") pod \"1c399bab-ff5e-4fd0-959b-354508c39eec\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " Feb 16 02:21:41.945176 master-0 kubenswrapper[7721]: I0216 02:21:41.945052 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-kubelet-dir\") pod \"1c399bab-ff5e-4fd0-959b-354508c39eec\" (UID: \"1c399bab-ff5e-4fd0-959b-354508c39eec\") " Feb 16 02:21:41.946203 master-0 kubenswrapper[7721]: I0216 02:21:41.946110 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-var-lock" (OuterVolumeSpecName: "var-lock") pod "1c399bab-ff5e-4fd0-959b-354508c39eec" (UID: "1c399bab-ff5e-4fd0-959b-354508c39eec"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:21:41.946310 master-0 kubenswrapper[7721]: I0216 02:21:41.946201 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c399bab-ff5e-4fd0-959b-354508c39eec" (UID: "1c399bab-ff5e-4fd0-959b-354508c39eec"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:21:41.948352 master-0 kubenswrapper[7721]: I0216 02:21:41.948281 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c399bab-ff5e-4fd0-959b-354508c39eec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c399bab-ff5e-4fd0-959b-354508c39eec" (UID: "1c399bab-ff5e-4fd0-959b-354508c39eec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:21:41.966083 master-0 kubenswrapper[7721]: I0216 02:21:41.965967 7721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:21:41.969622 master-0 kubenswrapper[7721]: I0216 02:21:41.969528 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:21:41.969622 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:21:41.969622 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:21:41.969622 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:21:41.970090 master-0 kubenswrapper[7721]: I0216 02:21:41.969663 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:21:42.047614 master-0 kubenswrapper[7721]: I0216 02:21:42.047526 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c399bab-ff5e-4fd0-959b-354508c39eec-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:21:42.047614 master-0 kubenswrapper[7721]: I0216 02:21:42.047584 7721 
reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:21:42.047614 master-0 kubenswrapper[7721]: I0216 02:21:42.047603 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c399bab-ff5e-4fd0-959b-354508c39eec-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:21:42.381831 master-0 kubenswrapper[7721]: I0216 02:21:42.381611 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"0e646a3e6607e9273936bfd732054c28836f671166eb8f720bf310ef03bc905c"} Feb 16 02:21:42.381831 master-0 kubenswrapper[7721]: I0216 02:21:42.381768 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:21:42.384286 master-0 kubenswrapper[7721]: I0216 02:21:42.384208 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"1c399bab-ff5e-4fd0-959b-354508c39eec","Type":"ContainerDied","Data":"5001e844bddc55d075219ca34aad85fbea6d50d9661dd853374d08556f873a41"} Feb 16 02:21:42.384286 master-0 kubenswrapper[7721]: I0216 02:21:42.384254 7721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 02:21:42.384551 master-0 kubenswrapper[7721]: I0216 02:21:42.384278 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5001e844bddc55d075219ca34aad85fbea6d50d9661dd853374d08556f873a41" Feb 16 02:21:42.416563 master-0 kubenswrapper[7721]: I0216 02:21:42.414035 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.414007924 podStartE2EDuration="3.414007924s" podCreationTimestamp="2026-02-16 02:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:21:42.413764187 +0000 UTC m=+905.907998479" watchObservedRunningTime="2026-02-16 02:21:42.414007924 +0000 UTC m=+905.908242226" Feb 16 02:21:42.969537 master-0 kubenswrapper[7721]: I0216 02:21:42.969411 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:21:42.969537 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:21:42.969537 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:21:42.969537 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:21:42.969537 master-0 kubenswrapper[7721]: I0216 02:21:42.969523 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:21:43.974182 master-0 kubenswrapper[7721]: I0216 02:21:43.970259 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:21:43.974182 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:21:43.974182 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:21:43.974182 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:21:43.974182 master-0 kubenswrapper[7721]: I0216 02:21:43.970359 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:21:44.969130 master-0 kubenswrapper[7721]: I0216 02:21:44.969027 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:21:44.969130 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:21:44.969130 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:21:44.969130 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:21:44.969624 master-0 kubenswrapper[7721]: I0216 02:21:44.969149 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:21:45.969149 master-0 kubenswrapper[7721]: I0216 02:21:45.969073 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:21:45.969149 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 
Feb 16 02:21:45.969149 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:45.969149 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:45.970082 master-0 kubenswrapper[7721]: I0216 02:21:45.969208 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:46.968323 master-0 kubenswrapper[7721]: I0216 02:21:46.968233 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:46.968323 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:46.968323 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:46.968323 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:46.968922 master-0 kubenswrapper[7721]: I0216 02:21:46.968349 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:47.968904 master-0 kubenswrapper[7721]: I0216 02:21:47.968756 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:47.968904 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:47.968904 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:47.968904 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:47.968904 master-0 kubenswrapper[7721]: I0216 02:21:47.968884 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:48.968404 master-0 kubenswrapper[7721]: I0216 02:21:48.968338 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:48.968404 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:48.968404 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:48.968404 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:48.969613 master-0 kubenswrapper[7721]: I0216 02:21:48.969019 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:49.756596 master-0 kubenswrapper[7721]: I0216 02:21:49.754089 7721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 16 02:21:49.968728 master-0 kubenswrapper[7721]: I0216 02:21:49.968611 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:49.968728 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:49.968728 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:49.968728 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:49.968728 master-0 kubenswrapper[7721]: I0216 02:21:49.968692 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:50.966857 master-0 kubenswrapper[7721]: I0216 02:21:50.966755 7721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:21:50.969705 master-0 kubenswrapper[7721]: I0216 02:21:50.969613 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:50.969705 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:50.969705 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:50.969705 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:50.970526 master-0 kubenswrapper[7721]: I0216 02:21:50.969756 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:51.968755 master-0 kubenswrapper[7721]: I0216 02:21:51.968666 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:51.968755 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:51.968755 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:51.968755 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:51.969292 master-0 kubenswrapper[7721]: I0216 02:21:51.968762 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:52.725672 master-0 kubenswrapper[7721]: I0216 02:21:52.725577 7721 scope.go:117] "RemoveContainer" containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e"
Feb 16 02:21:52.969043 master-0 kubenswrapper[7721]: I0216 02:21:52.968951 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:52.969043 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:52.969043 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:52.969043 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:52.969506 master-0 kubenswrapper[7721]: I0216 02:21:52.969058 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:53.494371 master-0 kubenswrapper[7721]: I0216 02:21:53.494309 7721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/4.log"
Feb 16 02:21:53.494694 master-0 kubenswrapper[7721]: I0216 02:21:53.494398 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" event={"ID":"a3065737-c7c0-4fbb-b484-f2a9204d4908","Type":"ContainerStarted","Data":"bda45e449010ac80aebe7e69ac6212061e08de3abb8a13bb6a0137df63e5677b"}
Feb 16 02:21:53.556786 master-0 kubenswrapper[7721]: I0216 02:21:53.556183 7721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=4.556156735 podStartE2EDuration="4.556156735s" podCreationTimestamp="2026-02-16 02:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:21:53.545045471 +0000 UTC m=+917.039279803" watchObservedRunningTime="2026-02-16 02:21:53.556156735 +0000 UTC m=+917.050391027"
Feb 16 02:21:53.969464 master-0 kubenswrapper[7721]: I0216 02:21:53.969345 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:53.969464 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:53.969464 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:53.969464 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:53.970404 master-0 kubenswrapper[7721]: I0216 02:21:53.970310 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:54.969422 master-0 kubenswrapper[7721]: I0216 02:21:54.969311 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:54.969422 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:54.969422 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:54.969422 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:54.970694 master-0 kubenswrapper[7721]: I0216 02:21:54.969474 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:55.971111 master-0 kubenswrapper[7721]: I0216 02:21:55.971021 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:55.971111 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:55.971111 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:55.971111 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:55.972095 master-0 kubenswrapper[7721]: I0216 02:21:55.971140 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:56.969408 master-0 kubenswrapper[7721]: I0216 02:21:56.969277 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:56.969408 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:56.969408 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:56.969408 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:56.970029 master-0 kubenswrapper[7721]: I0216 02:21:56.969489 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:57.969930 master-0 kubenswrapper[7721]: I0216 02:21:57.969779 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:57.969930 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:57.969930 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:57.969930 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:57.971168 master-0 kubenswrapper[7721]: I0216 02:21:57.969973 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:58.970582 master-0 kubenswrapper[7721]: I0216 02:21:58.970463 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:58.970582 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:58.970582 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:58.970582 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:58.971602 master-0 kubenswrapper[7721]: I0216 02:21:58.970605 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:21:59.968873 master-0 kubenswrapper[7721]: I0216 02:21:59.968763 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:21:59.968873 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:21:59.968873 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:21:59.968873 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:21:59.968873 master-0 kubenswrapper[7721]: I0216 02:21:59.968870 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:00.969684 master-0 kubenswrapper[7721]: I0216 02:22:00.969584 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:00.969684 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:00.969684 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:00.969684 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:00.970931 master-0 kubenswrapper[7721]: I0216 02:22:00.969687 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:01.968828 master-0 kubenswrapper[7721]: I0216 02:22:01.968716 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:01.968828 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:01.968828 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:01.968828 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:01.968828 master-0 kubenswrapper[7721]: I0216 02:22:01.968827 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:02.968778 master-0 kubenswrapper[7721]: I0216 02:22:02.968697 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:02.968778 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:02.968778 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:02.968778 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:02.969785 master-0 kubenswrapper[7721]: I0216 02:22:02.968802 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:03.969956 master-0 kubenswrapper[7721]: I0216 02:22:03.969860 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:03.969956 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:03.969956 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:03.969956 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:03.971248 master-0 kubenswrapper[7721]: I0216 02:22:03.969963 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:04.968116 master-0 kubenswrapper[7721]: I0216 02:22:04.967997 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:04.968116 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:04.968116 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:04.968116 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:04.968598 master-0 kubenswrapper[7721]: I0216 02:22:04.968131 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:05.969426 master-0 kubenswrapper[7721]: I0216 02:22:05.969345 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:05.969426 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:05.969426 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:05.969426 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:05.970497 master-0 kubenswrapper[7721]: I0216 02:22:05.969510 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:06.968397 master-0 kubenswrapper[7721]: I0216 02:22:06.968310 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:06.968397 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:06.968397 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:06.968397 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:06.968397 master-0 kubenswrapper[7721]: I0216 02:22:06.968394 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:07.969557 master-0 kubenswrapper[7721]: I0216 02:22:07.969482 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:07.969557 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:07.969557 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:07.969557 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:07.970976 master-0 kubenswrapper[7721]: I0216 02:22:07.969578 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:08.967967 master-0 kubenswrapper[7721]: I0216 02:22:08.967883 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:08.967967 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:08.967967 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:08.967967 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:08.967967 master-0 kubenswrapper[7721]: I0216 02:22:08.967962 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:09.968598 master-0 kubenswrapper[7721]: I0216 02:22:09.968485 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:09.968598 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:09.968598 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:09.968598 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:09.968598 master-0 kubenswrapper[7721]: I0216 02:22:09.968580 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:10.969054 master-0 kubenswrapper[7721]: I0216 02:22:10.968859 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:10.969054 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:10.969054 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:10.969054 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:10.969054 master-0 kubenswrapper[7721]: I0216 02:22:10.968922 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:11.969348 master-0 kubenswrapper[7721]: I0216 02:22:11.969222 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:11.969348 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:11.969348 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:11.969348 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:11.969348 master-0 kubenswrapper[7721]: I0216 02:22:11.969342 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:12.969136 master-0 kubenswrapper[7721]: I0216 02:22:12.969038 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:12.969136 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:12.969136 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:12.969136 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:12.970483 master-0 kubenswrapper[7721]: I0216 02:22:12.969137 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:13.968956 master-0 kubenswrapper[7721]: I0216 02:22:13.968696 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:13.968956 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:13.968956 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:13.968956 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:13.968956 master-0 kubenswrapper[7721]: I0216 02:22:13.968781 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:14.968522 master-0 kubenswrapper[7721]: I0216 02:22:14.968410 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:14.968522 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:14.968522 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:14.968522 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:14.969643 master-0 kubenswrapper[7721]: I0216 02:22:14.968537 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:15.969280 master-0 kubenswrapper[7721]: I0216 02:22:15.969135 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:15.969280 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:15.969280 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:15.969280 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:15.969280 master-0 kubenswrapper[7721]: I0216 02:22:15.969260 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:16.968941 master-0 kubenswrapper[7721]: I0216 02:22:16.968850 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:16.968941 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:16.968941 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:16.968941 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:16.970166 master-0 kubenswrapper[7721]: I0216 02:22:16.969379 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:17.968749 master-0 kubenswrapper[7721]: I0216 02:22:17.968699 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:17.968749 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:17.968749 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:17.968749 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:17.969149 master-0 kubenswrapper[7721]: I0216 02:22:17.969120 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:18.969022 master-0 kubenswrapper[7721]: I0216 02:22:18.968959 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:18.969022 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:18.969022 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:18.969022 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:18.970340 master-0 kubenswrapper[7721]: I0216 02:22:18.970292 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:19.970604 master-0 kubenswrapper[7721]: I0216 02:22:19.970503 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:19.970604 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:19.970604 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:19.970604 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:19.970604 master-0 kubenswrapper[7721]: I0216 02:22:19.970580 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:20.969567 master-0 kubenswrapper[7721]: I0216 02:22:20.969460 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:20.969567 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:20.969567 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:20.969567 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:20.970246 master-0 kubenswrapper[7721]: I0216 02:22:20.969578 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:21.969202 master-0 kubenswrapper[7721]: I0216 02:22:21.969133 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:21.969202 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:21.969202 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:21.969202 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:21.970356 master-0 kubenswrapper[7721]: I0216 02:22:21.969234 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:22.968385 master-0 kubenswrapper[7721]: I0216 02:22:22.968288 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:22.968385 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:22.968385 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:22.968385 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:22.968868 master-0 kubenswrapper[7721]: I0216 02:22:22.968408 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:23.333961 master-0 kubenswrapper[7721]: I0216 02:22:23.333806 7721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 16 02:22:23.334855 master-0 kubenswrapper[7721]: E0216 02:22:23.334181 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbea4f60-d0a6-402d-9cff-1df19947e3e4" containerName="installer"
Feb 16 02:22:23.334855 master-0 kubenswrapper[7721]: I0216 02:22:23.334203 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbea4f60-d0a6-402d-9cff-1df19947e3e4" containerName="installer"
Feb 16 02:22:23.334855 master-0 kubenswrapper[7721]: E0216 02:22:23.334224 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c399bab-ff5e-4fd0-959b-354508c39eec" containerName="installer"
Feb 16 02:22:23.334855 master-0 kubenswrapper[7721]: I0216 02:22:23.334239 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c399bab-ff5e-4fd0-959b-354508c39eec" containerName="installer"
Feb 16 02:22:23.334855 master-0 kubenswrapper[7721]: I0216 02:22:23.334537 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c399bab-ff5e-4fd0-959b-354508c39eec" containerName="installer"
Feb 16 02:22:23.334855 master-0 kubenswrapper[7721]: I0216 02:22:23.334565 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbea4f60-d0a6-402d-9cff-1df19947e3e4" containerName="installer"
Feb 16 02:22:23.335303 master-0 kubenswrapper[7721]: I0216 02:22:23.335112 7721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 16 02:22:23.335509 master-0 kubenswrapper[7721]: I0216 02:22:23.335417 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" containerID="cri-o://b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961" gracePeriod=15
Feb 16 02:22:23.335639 master-0 kubenswrapper[7721]: I0216 02:22:23.335536 7721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369" gracePeriod=15
Feb 16 02:22:23.335747 master-0 kubenswrapper[7721]: I0216 02:22:23.335577 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.337756 master-0 kubenswrapper[7721]: I0216 02:22:23.337703 7721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 02:22:23.338120 master-0 kubenswrapper[7721]: E0216 02:22:23.338078 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz"
Feb 16 02:22:23.338120 master-0 kubenswrapper[7721]: I0216 02:22:23.338113 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz"
Feb 16 02:22:23.338289 master-0 kubenswrapper[7721]: E0216 02:22:23.338141 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver"
Feb 16 02:22:23.338289 master-0 kubenswrapper[7721]: I0216 02:22:23.338155 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver"
Feb 16 02:22:23.338289 master-0 kubenswrapper[7721]: E0216 02:22:23.338189 7721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup"
Feb 16 02:22:23.338289 master-0 kubenswrapper[7721]: I0216 02:22:23.338201 7721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup"
Feb 16 02:22:23.338614 master-0 kubenswrapper[7721]: I0216 02:22:23.338507 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup"
Feb 16 02:22:23.338614 master-0 kubenswrapper[7721]: I0216 02:22:23.338534 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz"
Feb 16 02:22:23.338614 master-0
kubenswrapper[7721]: I0216 02:22:23.338561 7721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver"
Feb 16 02:22:23.341289 master-0 kubenswrapper[7721]: I0216 02:22:23.341239 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.389958 master-0 kubenswrapper[7721]: E0216 02:22:23.389873 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.398341 master-0 kubenswrapper[7721]: E0216 02:22:23.398220 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.443730 master-0 kubenswrapper[7721]: I0216 02:22:23.443627 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.443730 master-0 kubenswrapper[7721]: I0216 02:22:23.443721 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.443991 master-0 kubenswrapper[7721]: I0216 02:22:23.443911 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.444106 master-0 kubenswrapper[7721]: I0216 02:22:23.444060 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.444181 master-0 kubenswrapper[7721]: I0216 02:22:23.444127 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.444397 master-0 kubenswrapper[7721]: I0216 02:22:23.444320 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.444669 master-0 kubenswrapper[7721]: I0216 02:22:23.444625 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.444758 master-0 kubenswrapper[7721]: I0216 02:22:23.444681 7721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.546010 master-0 kubenswrapper[7721]: I0216 02:22:23.545924 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.546244 master-0 kubenswrapper[7721]: I0216 02:22:23.546020 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.546244 master-0 kubenswrapper[7721]: I0216 02:22:23.546059 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.546244 master-0 kubenswrapper[7721]: I0216 02:22:23.546090 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName:
\"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.546244 master-0 kubenswrapper[7721]: I0216 02:22:23.546152 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.546244 master-0 kubenswrapper[7721]: I0216 02:22:23.546169 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.546691 master-0 kubenswrapper[7721]: I0216 02:22:23.546263 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.546691 master-0 kubenswrapper[7721]: I0216 02:22:23.546271 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.546691 master-0 kubenswrapper[7721]: I0216 02:22:23.546378 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.546691 master-0 kubenswrapper[7721]: I0216 02:22:23.546412 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.546691 master-0 kubenswrapper[7721]: I0216 02:22:23.546535 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.546691 master-0 kubenswrapper[7721]: I0216 02:22:23.546578 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.546691 master-0 kubenswrapper[7721]: I0216 02:22:23.546687 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.547111 master-0 kubenswrapper[7721]: I0216 02:22:23.546698 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.547111 master-0 kubenswrapper[7721]: I0216 02:22:23.546721 7721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.547111 master-0 kubenswrapper[7721]: I0216 02:22:23.546761 7721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.691575 master-0 kubenswrapper[7721]: I0216 02:22:23.691252 7721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:23.699897 master-0 kubenswrapper[7721]: I0216 02:22:23.699774 7721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:23.749377 master-0 kubenswrapper[7721]: W0216 02:22:23.748685 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf941eaba3a97825b1c8002f4b27a20.slice/crio-460579bff076abf6e4d419f44b68400fd50e9b2bc9a03fc49494f7b68ef04045 WatchSource:0}: Error finding container 460579bff076abf6e4d419f44b68400fd50e9b2bc9a03fc49494f7b68ef04045: Status 404 returned error can't find the container with id 460579bff076abf6e4d419f44b68400fd50e9b2bc9a03fc49494f7b68ef04045
Feb 16 02:22:23.751457 master-0 kubenswrapper[7721]: W0216 02:22:23.751353 7721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619e637b8575311b72d43b7b782d610a.slice/crio-47742d18510812b119307e6d49d3c726c8e73fb1e202f5e57c5cfb5945faf19d WatchSource:0}: Error finding container 47742d18510812b119307e6d49d3c726c8e73fb1e202f5e57c5cfb5945faf19d: Status 404 returned error can't find the container with id 47742d18510812b119307e6d49d3c726c8e73fb1e202f5e57c5cfb5945faf19d
Feb 16 02:22:23.758823 master-0 kubenswrapper[7721]: E0216 02:22:23.758549 7721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189498c949acfdb5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ebf941eaba3a97825b1c8002f4b27a20,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:22:23.755369909 +0000 UTC m=+947.249604211,LastTimestamp:2026-02-16 02:22:23.755369909 +0000 UTC m=+947.249604211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 02:22:23.766707 master-0 kubenswrapper[7721]: I0216 02:22:23.766491 7721 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369" exitCode=0
Feb 16 02:22:23.769646 master-0 kubenswrapper[7721]: I0216 02:22:23.769537 7721 generic.go:334] "Generic (PLEG): container finished" podID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" containerID="573f599caba2d6aea83d83677716e638746c9026d70482beb5f92bc432117189" exitCode=0
Feb 16 02:22:23.769797 master-0 kubenswrapper[7721]: I0216 02:22:23.769641 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e","Type":"ContainerDied","Data":"573f599caba2d6aea83d83677716e638746c9026d70482beb5f92bc432117189"}
Feb 16 02:22:23.771402 master-0 kubenswrapper[7721]: I0216 02:22:23.771027 7721 status_manager.go:851] "Failed to get status for pod" podUID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:22:23.774461 master-0 kubenswrapper[7721]: I0216 02:22:23.774340 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebf941eaba3a97825b1c8002f4b27a20","Type":"ContainerStarted","Data":"460579bff076abf6e4d419f44b68400fd50e9b2bc9a03fc49494f7b68ef04045"}
Feb 16 02:22:23.969235 master-0 kubenswrapper[7721]: I0216 02:22:23.969142 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:23.969235 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:23.969235 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:23.969235 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:23.969591 master-0 kubenswrapper[7721]: I0216 02:22:23.969236 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:24.787727 master-0 kubenswrapper[7721]: I0216 02:22:24.787646 7721 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="2f315c09e62d7e5ecdac8433decccf201da1935e2dc178927c912fe29e35daf4" exitCode=0
Feb 16 02:22:24.788837 master-0 kubenswrapper[7721]: I0216 02:22:24.787779 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerDied","Data":"2f315c09e62d7e5ecdac8433decccf201da1935e2dc178927c912fe29e35daf4"}
Feb 16 02:22:24.788942 master-0 kubenswrapper[7721]: I0216 02:22:24.788844 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"47742d18510812b119307e6d49d3c726c8e73fb1e202f5e57c5cfb5945faf19d"}
Feb 16 02:22:24.791952 master-0
kubenswrapper[7721]: I0216 02:22:24.791891 7721 status_manager.go:851] "Failed to get status for pod" podUID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:22:24.792261 master-0 kubenswrapper[7721]: E0216 02:22:24.791975 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:24.792873 master-0 kubenswrapper[7721]: I0216 02:22:24.792806 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebf941eaba3a97825b1c8002f4b27a20","Type":"ContainerStarted","Data":"c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa"}
Feb 16 02:22:24.794843 master-0 kubenswrapper[7721]: I0216 02:22:24.794131 7721 status_manager.go:851] "Failed to get status for pod" podUID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:22:24.794843 master-0 kubenswrapper[7721]: E0216 02:22:24.794496 7721 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:22:24.968716 master-0 kubenswrapper[7721]: I0216 02:22:24.968549 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:24.968716 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld
Feb 16 02:22:24.968716 master-0 kubenswrapper[7721]: [+]process-running ok
Feb 16 02:22:24.968716 master-0 kubenswrapper[7721]: healthz check failed
Feb 16 02:22:24.969161 master-0 kubenswrapper[7721]: I0216 02:22:24.968745 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:25.179903 master-0 kubenswrapper[7721]: I0216 02:22:25.179809 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:22:25.181287 master-0 kubenswrapper[7721]: I0216 02:22:25.181050 7721 status_manager.go:851] "Failed to get status for pod" podUID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:22:25.374470 master-0 kubenswrapper[7721]: I0216 02:22:25.374366 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") pod \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") "
Feb 16 02:22:25.374750 master-0 kubenswrapper[7721]: I0216 02:22:25.374557 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:22:25.394599 master-0 kubenswrapper[7721]: I0216 02:22:25.393772 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") pod \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") "
Feb 16 02:22:25.394599 master-0 kubenswrapper[7721]: I0216 02:22:25.394178 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") "
Feb 16 02:22:25.394599 master-0 kubenswrapper[7721]: I0216 02:22:25.394396 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock" (OuterVolumeSpecName: "var-lock") pod "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:22:25.396004 master-0 kubenswrapper[7721]: I0216 02:22:25.395189 7721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.396004 master-0 kubenswrapper[7721]: I0216 02:22:25.395247 7721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.426801 master-0 kubenswrapper[7721]: I0216 02:22:25.425755 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:22:25.497781 master-0 kubenswrapper[7721]: I0216 02:22:25.497749 7721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.667420 master-0 kubenswrapper[7721]: I0216 02:22:25.667369 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 02:22:25.699447 master-0 kubenswrapper[7721]: I0216 02:22:25.699347 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") "
Feb 16 02:22:25.699658 master-0 kubenswrapper[7721]: I0216 02:22:25.699508 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs" (OuterVolumeSpecName: "logs") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:22:25.699658 master-0 kubenswrapper[7721]: I0216 02:22:25.699566 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") "
Feb 16 02:22:25.699658 master-0 kubenswrapper[7721]: I0216 02:22:25.699623 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") "
Feb 16 02:22:25.699658 master-0 kubenswrapper[7721]: I0216 02:22:25.699645 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:22:25.699846 master-0 kubenswrapper[7721]: I0216 02:22:25.699700 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") "
Feb 16 02:22:25.699846 master-0 kubenswrapper[7721]: I0216 02:22:25.699694 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets" (OuterVolumeSpecName: "secrets") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:22:25.699846 master-0 kubenswrapper[7721]: I0216 02:22:25.699735 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") "
Feb 16 02:22:25.699846 master-0 kubenswrapper[7721]: I0216 02:22:25.699727 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:22:25.699846 master-0 kubenswrapper[7721]: I0216 02:22:25.699839 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config" (OuterVolumeSpecName: "config") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:22:25.699846 master-0 kubenswrapper[7721]: I0216 02:22:25.699800 7721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:22:25.700103 master-0 kubenswrapper[7721]: I0216 02:22:25.699816 7721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") "
Feb 16 02:22:25.701457 master-0 kubenswrapper[7721]: I0216 02:22:25.700272 7721 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.701457 master-0 kubenswrapper[7721]: I0216 02:22:25.700304 7721 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.701457 master-0 kubenswrapper[7721]: I0216 02:22:25.700323 7721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.701457 master-0 kubenswrapper[7721]: I0216 02:22:25.700343 7721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.701457 master-0 kubenswrapper[7721]: I0216 02:22:25.700362 7721 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.701457 master-0 kubenswrapper[7721]: I0216 02:22:25.700380 7721 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") on node \"master-0\" DevicePath \"\""
Feb 16 02:22:25.800587 master-0 kubenswrapper[7721]: I0216 02:22:25.800519 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"fd47e260d84ca6db305938f1f4e1895a6f6bdda99aeb361b11a3ab5204667a82"}
Feb 16 02:22:25.800587 master-0 kubenswrapper[7721]: I0216 02:22:25.800564 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"8390bbe4d8742fdad642a6f50a5fdda06aa95077fda8b2a4a38589b254209605"}
Feb 16 02:22:25.800587 master-0 kubenswrapper[7721]: I0216 02:22:25.800573 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"fec477df3910b6405819fe05a8d3d1b8456afafc6dfca3d23c64fa136cd595d6"}
Feb 16 02:22:25.803180 master-0 kubenswrapper[7721]: I0216 02:22:25.803142 7721 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961" exitCode=0
Feb 16 02:22:25.803255 master-0 kubenswrapper[7721]: I0216 02:22:25.803212 7721 scope.go:117] "RemoveContainer" containerID="34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369"
Feb 16 02:22:25.803255 master-0 kubenswrapper[7721]: I0216 02:22:25.803220 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 02:22:25.816665 master-0 kubenswrapper[7721]: I0216 02:22:25.816626 7721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:22:25.816764 master-0 kubenswrapper[7721]: I0216 02:22:25.816682 7721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e","Type":"ContainerDied","Data":"519d12e7d67d992627ab3afec4b63569e16dcc4c57e6118793f1a36ff0f10027"}
Feb 16 02:22:25.816764 master-0 kubenswrapper[7721]: I0216 02:22:25.816718 7721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="519d12e7d67d992627ab3afec4b63569e16dcc4c57e6118793f1a36ff0f10027"
Feb 16 02:22:25.853549 master-0 kubenswrapper[7721]: I0216 02:22:25.852665 7721 scope.go:117] "RemoveContainer" containerID="b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961"
Feb 16 02:22:25.872074 master-0 kubenswrapper[7721]: I0216 02:22:25.871740 7721 scope.go:117] "RemoveContainer" containerID="3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b"
Feb 16 02:22:25.890833 master-0 kubenswrapper[7721]: I0216 02:22:25.890469 7721 scope.go:117] "RemoveContainer" containerID="34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369"
Feb 16 02:22:25.893885 master-0 kubenswrapper[7721]: E0216 02:22:25.893839 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369\": container with ID starting with 34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369 not found: ID does not exist" containerID="34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369"
Feb 16
02:22:25.893967 master-0 kubenswrapper[7721]: I0216 02:22:25.893887 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369"} err="failed to get container status \"34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369\": rpc error: code = NotFound desc = could not find container \"34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369\": container with ID starting with 34fdb1528016b6c99888a5b17a344114bc05a46b5e53091141b876be457cb369 not found: ID does not exist" Feb 16 02:22:25.893967 master-0 kubenswrapper[7721]: I0216 02:22:25.893914 7721 scope.go:117] "RemoveContainer" containerID="b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961" Feb 16 02:22:25.899036 master-0 kubenswrapper[7721]: E0216 02:22:25.898996 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961\": container with ID starting with b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961 not found: ID does not exist" containerID="b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961" Feb 16 02:22:25.899036 master-0 kubenswrapper[7721]: I0216 02:22:25.899027 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961"} err="failed to get container status \"b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961\": rpc error: code = NotFound desc = could not find container \"b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961\": container with ID starting with b4c44b5842e3ec8dee60bb9b2661e316b9a431e19fc3d6452f904a284fd4a961 not found: ID does not exist" Feb 16 02:22:25.899208 master-0 kubenswrapper[7721]: I0216 02:22:25.899045 7721 scope.go:117] "RemoveContainer" 
containerID="3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b" Feb 16 02:22:25.899460 master-0 kubenswrapper[7721]: E0216 02:22:25.899420 7721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b\": container with ID starting with 3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b not found: ID does not exist" containerID="3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b" Feb 16 02:22:25.899460 master-0 kubenswrapper[7721]: I0216 02:22:25.899454 7721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b"} err="failed to get container status \"3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b\": rpc error: code = NotFound desc = could not find container \"3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b\": container with ID starting with 3872302a7c76c50aca9a9d80255ef01b4820b2081427956ca06ca96b2b4c695b not found: ID does not exist" Feb 16 02:22:25.968277 master-0 kubenswrapper[7721]: I0216 02:22:25.968228 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:25.968277 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:22:25.968277 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:22:25.968277 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:22:25.968530 master-0 kubenswrapper[7721]: I0216 02:22:25.968291 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:26.796523 master-0 kubenswrapper[7721]: I0216 02:22:26.796392 7721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" path="/var/lib/kubelet/pods/5d1e91e5a1fed5cf7076a92d2830d36f/volumes" Feb 16 02:22:26.796800 master-0 kubenswrapper[7721]: I0216 02:22:26.796780 7721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 02:22:26.967474 master-0 kubenswrapper[7721]: I0216 02:22:26.967296 7721 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:26.967474 master-0 kubenswrapper[7721]: [-]has-synced failed: reason withheld Feb 16 02:22:26.967474 master-0 kubenswrapper[7721]: [+]process-running ok Feb 16 02:22:26.967474 master-0 kubenswrapper[7721]: healthz check failed Feb 16 02:22:26.968182 master-0 kubenswrapper[7721]: I0216 02:22:26.967484 7721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:27.509293 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 02:22:27.544510 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 02:22:27.544831 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 02:22:27.546155 master-0 systemd[1]: kubelet.service: Consumed 2min 48.273s CPU time. Feb 16 02:22:27.570063 master-0 systemd[1]: Starting Kubernetes Kubelet... 
Feb 16 02:22:27.710724 master-0 kubenswrapper[31559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 02:22:27.710724 master-0 kubenswrapper[31559]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 02:22:27.710724 master-0 kubenswrapper[31559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 02:22:27.710724 master-0 kubenswrapper[31559]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 02:22:27.710724 master-0 kubenswrapper[31559]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 02:22:27.710724 master-0 kubenswrapper[31559]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 02:22:27.711984 master-0 kubenswrapper[31559]: I0216 02:22:27.710836 31559 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 02:22:27.713719 master-0 kubenswrapper[31559]: W0216 02:22:27.713681 31559 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 02:22:27.713719 master-0 kubenswrapper[31559]: W0216 02:22:27.713705 31559 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 02:22:27.713719 master-0 kubenswrapper[31559]: W0216 02:22:27.713713 31559 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 02:22:27.713719 master-0 kubenswrapper[31559]: W0216 02:22:27.713719 31559 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 02:22:27.713719 master-0 kubenswrapper[31559]: W0216 02:22:27.713724 31559 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 02:22:27.713719 master-0 kubenswrapper[31559]: W0216 02:22:27.713731 31559 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 02:22:27.713719 master-0 kubenswrapper[31559]: W0216 02:22:27.713736 31559 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713742 31559 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713748 31559 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713753 31559 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713758 31559 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713763 31559 
feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713771 31559 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713779 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713785 31559 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713790 31559 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713795 31559 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713802 31559 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713808 31559 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713814 31559 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713819 31559 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713824 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713829 31559 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713835 31559 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713840 31559 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 02:22:27.714420 master-0 kubenswrapper[31559]: W0216 02:22:27.713845 31559 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713858 31559 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713864 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713869 31559 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713874 31559 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713879 31559 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 
02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713884 31559 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713889 31559 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713895 31559 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713900 31559 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713907 31559 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713913 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713918 31559 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713924 31559 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713929 31559 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713935 31559 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713940 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713945 31559 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713950 31559 feature_gate.go:330] 
unrecognized feature gate: ImageStreamImportMode Feb 16 02:22:27.715793 master-0 kubenswrapper[31559]: W0216 02:22:27.713955 31559 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.713960 31559 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.713964 31559 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.713970 31559 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.713975 31559 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.713982 31559 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.713987 31559 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.713993 31559 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714000 31559 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714006 31559 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714011 31559 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714016 31559 feature_gate.go:330] unrecognized feature gate: Example Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714021 31559 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714026 31559 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714031 31559 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714036 31559 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714041 31559 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714046 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714051 31559 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714056 31559 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 02:22:27.716948 master-0 kubenswrapper[31559]: W0216 02:22:27.714061 31559 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 02:22:27.718648 master-0 
kubenswrapper[31559]: W0216 02:22:27.714066 31559 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: W0216 02:22:27.714071 31559 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: W0216 02:22:27.714076 31559 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: W0216 02:22:27.714081 31559 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: W0216 02:22:27.714086 31559 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: W0216 02:22:27.714091 31559 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: W0216 02:22:27.714096 31559 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714198 31559 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714209 31559 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714218 31559 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714225 31559 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714233 31559 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714239 31559 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714246 31559 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 02:22:27.718648 
master-0 kubenswrapper[31559]: I0216 02:22:27.714256 31559 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714262 31559 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714268 31559 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714274 31559 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714281 31559 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714287 31559 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714293 31559 flags.go:64] FLAG: --cgroup-root="" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714299 31559 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 02:22:27.718648 master-0 kubenswrapper[31559]: I0216 02:22:27.714305 31559 flags.go:64] FLAG: --client-ca-file="" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714310 31559 flags.go:64] FLAG: --cloud-config="" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714316 31559 flags.go:64] FLAG: --cloud-provider="" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714321 31559 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714329 31559 flags.go:64] FLAG: --cluster-domain="" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714335 31559 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714340 31559 flags.go:64] FLAG: --config-dir="" Feb 16 02:22:27.720767 master-0 
kubenswrapper[31559]: I0216 02:22:27.714346 31559 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714352 31559 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714359 31559 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714365 31559 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714371 31559 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714377 31559 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714382 31559 flags.go:64] FLAG: --contention-profiling="false" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714388 31559 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714395 31559 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714401 31559 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714407 31559 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714414 31559 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714420 31559 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714427 31559 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714451 31559 flags.go:64] FLAG: 
--enable-load-reader="false" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714459 31559 flags.go:64] FLAG: --enable-server="true" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714465 31559 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714474 31559 flags.go:64] FLAG: --event-burst="100" Feb 16 02:22:27.720767 master-0 kubenswrapper[31559]: I0216 02:22:27.714480 31559 flags.go:64] FLAG: --event-qps="50" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714486 31559 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714493 31559 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714498 31559 flags.go:64] FLAG: --eviction-hard="" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714505 31559 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714512 31559 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714517 31559 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714524 31559 flags.go:64] FLAG: --eviction-soft="" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714529 31559 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714536 31559 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714542 31559 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714548 31559 flags.go:64] FLAG: 
--experimental-mounter-path="" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714553 31559 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714559 31559 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714565 31559 flags.go:64] FLAG: --feature-gates="" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714572 31559 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714578 31559 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714584 31559 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714590 31559 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714596 31559 flags.go:64] FLAG: --healthz-port="10248" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714602 31559 flags.go:64] FLAG: --help="false" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714608 31559 flags.go:64] FLAG: --hostname-override="" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714614 31559 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714620 31559 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714625 31559 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 02:22:27.722783 master-0 kubenswrapper[31559]: I0216 02:22:27.714631 31559 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714636 31559 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 02:22:27.724347 
master-0 kubenswrapper[31559]: I0216 02:22:27.714642 31559 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714648 31559 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714653 31559 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714659 31559 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714664 31559 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714671 31559 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714676 31559 flags.go:64] FLAG: --kube-reserved="" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714682 31559 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714687 31559 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714693 31559 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714699 31559 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714704 31559 flags.go:64] FLAG: --lock-file="" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714710 31559 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714715 31559 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714721 31559 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: 
I0216 02:22:27.714730 31559 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714736 31559 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714742 31559 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714748 31559 flags.go:64] FLAG: --logging-format="text" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714753 31559 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714760 31559 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714765 31559 flags.go:64] FLAG: --manifest-url="" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714771 31559 flags.go:64] FLAG: --manifest-url-header="" Feb 16 02:22:27.724347 master-0 kubenswrapper[31559]: I0216 02:22:27.714778 31559 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714784 31559 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714791 31559 flags.go:64] FLAG: --max-pods="110" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714796 31559 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714803 31559 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714809 31559 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714814 31559 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: 
I0216 02:22:27.714820 31559 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714826 31559 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714831 31559 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714844 31559 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714849 31559 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714855 31559 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714861 31559 flags.go:64] FLAG: --pod-cidr="" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714867 31559 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714875 31559 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714881 31559 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714892 31559 flags.go:64] FLAG: --pods-per-core="0" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714897 31559 flags.go:64] FLAG: --port="10250" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714903 31559 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714909 31559 flags.go:64] FLAG: --provider-id="" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714915 
31559 flags.go:64] FLAG: --qos-reserved="" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714920 31559 flags.go:64] FLAG: --read-only-port="10255" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714927 31559 flags.go:64] FLAG: --register-node="true" Feb 16 02:22:27.727835 master-0 kubenswrapper[31559]: I0216 02:22:27.714932 31559 flags.go:64] FLAG: --register-schedulable="true" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714938 31559 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714947 31559 flags.go:64] FLAG: --registry-burst="10" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714952 31559 flags.go:64] FLAG: --registry-qps="5" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714958 31559 flags.go:64] FLAG: --reserved-cpus="" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714964 31559 flags.go:64] FLAG: --reserved-memory="" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714970 31559 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714977 31559 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714983 31559 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714989 31559 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.714995 31559 flags.go:64] FLAG: --runonce="false" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715003 31559 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715009 31559 flags.go:64] FLAG: 
--runtime-request-timeout="2m0s" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715015 31559 flags.go:64] FLAG: --seccomp-default="false" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715020 31559 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715026 31559 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715032 31559 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715038 31559 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715043 31559 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715049 31559 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715055 31559 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715060 31559 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715066 31559 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715072 31559 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715077 31559 flags.go:64] FLAG: --system-cgroups="" Feb 16 02:22:27.729617 master-0 kubenswrapper[31559]: I0216 02:22:27.715090 31559 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.715099 31559 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.715105 31559 
flags.go:64] FLAG: --tls-cert-file="" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.715110 31559 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716062 31559 flags.go:64] FLAG: --tls-min-version="" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716070 31559 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716076 31559 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716082 31559 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716090 31559 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716095 31559 flags.go:64] FLAG: --v="2" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716104 31559 flags.go:64] FLAG: --version="false" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716111 31559 flags.go:64] FLAG: --vmodule="" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716118 31559 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: I0216 02:22:27.716124 31559 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716264 31559 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716271 31559 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716278 31559 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716286 31559 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716298 31559 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716305 31559 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716312 31559 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716320 31559 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 02:22:27.731788 master-0 kubenswrapper[31559]: W0216 02:22:27.716327 31559 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716332 31559 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716338 31559 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716343 31559 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716349 31559 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716367 31559 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716372 31559 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716377 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 
02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716382 31559 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716387 31559 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716393 31559 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716399 31559 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716404 31559 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716409 31559 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716414 31559 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716419 31559 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716424 31559 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716430 31559 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
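The `flags.go:64] FLAG: --name="value"` entries above are the kubelet echoing every command-line flag it parsed at startup. When auditing a node, it can help to collect these into a dictionary and diff them against the rendered kubelet config. A minimal sketch (the function name and sample text are illustrative, not part of the log):

```python
import re

# Matches the kubelet's startup flag dump, e.g.
#   ... flags.go:64] FLAG: --max-pods="110"
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def parse_flags(log_text):
    """Return {flag: value} from kubelet 'FLAG:' log entries."""
    return {m.group(1): m.group(2) for m in FLAG_RE.finditer(log_text)}

# Two sample entries copied from the log above.
sample = (
    'I0216 02:22:27.714791 31559 flags.go:64] FLAG: --max-pods="110" '
    'I0216 02:22:27.714693 31559 flags.go:64] FLAG: --kubelet-cgroups=""'
)
flags = parse_flags(sample)
# flags["--max-pods"] == "110"; flags["--kubelet-cgroups"] == ""
```

Feeding it the full journal output (e.g. piped from `journalctl -u kubelet`) yields one entry per flag, which makes the deprecated flags called out in the startup warnings easy to cross-check.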
Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716455 31559 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716461 31559 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 02:22:27.733910 master-0 kubenswrapper[31559]: W0216 02:22:27.716466 31559 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716471 31559 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716476 31559 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716481 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716486 31559 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716491 31559 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716495 31559 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716500 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716509 31559 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716514 31559 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716518 31559 feature_gate.go:330] unrecognized feature 
gate: Example Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716523 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716530 31559 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716536 31559 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716541 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716548 31559 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716554 31559 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716561 31559 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716567 31559 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716573 31559 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 02:22:27.735130 master-0 kubenswrapper[31559]: W0216 02:22:27.716578 31559 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716583 31559 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716589 31559 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716595 31559 feature_gate.go:330] unrecognized feature gate: PinnedImages 
Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716600 31559 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716605 31559 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716611 31559 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716626 31559 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716632 31559 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716637 31559 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716643 31559 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716649 31559 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716654 31559 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716659 31559 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716664 31559 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716669 31559 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716673 31559 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 02:22:27.736350 
master-0 kubenswrapper[31559]: W0216 02:22:27.716678 31559 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716683 31559 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716689 31559 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 02:22:27.736350 master-0 kubenswrapper[31559]: W0216 02:22:27.716697 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.716702 31559 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.716707 31559 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.716712 31559 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: I0216 02:22:27.716720 31559 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: I0216 02:22:27.724796 31559 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: I0216 02:22:27.724839 31559 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 
02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.724966 31559 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.724978 31559 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.724987 31559 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.724997 31559 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.725006 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.725015 31559 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.725024 31559 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.725032 31559 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 02:22:27.737733 master-0 kubenswrapper[31559]: W0216 02:22:27.725041 31559 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725049 31559 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725057 31559 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725065 31559 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725073 31559 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 02:22:27.741057 master-0 
kubenswrapper[31559]: W0216 02:22:27.725081 31559 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725089 31559 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725098 31559 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725105 31559 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725114 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725122 31559 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725130 31559 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725138 31559 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725146 31559 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725155 31559 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725163 31559 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725171 31559 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725179 31559 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 
02:22:27.725187 31559 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725195 31559 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 02:22:27.741057 master-0 kubenswrapper[31559]: W0216 02:22:27.725203 31559 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725214 31559 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725229 31559 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725240 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725248 31559 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725257 31559 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725266 31559 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725274 31559 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725283 31559 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725290 31559 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725299 31559 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 
02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725307 31559 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725315 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725323 31559 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725333 31559 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725343 31559 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725352 31559 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725361 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725369 31559 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 02:22:27.767156 master-0 kubenswrapper[31559]: W0216 02:22:27.725377 31559 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725388 31559 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725398 31559 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
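The repeated `feature_gate.go:330] unrecognized feature gate:` warnings arise because OpenShift passes its own operator-level gate names (GatewayAPI, AlibabaPlatform, and so on) to the upstream kubelet, which only recognizes Kubernetes gates; those names are warned about and ignored rather than treated as errors. The behavior can be sketched as a simple partition (the known-gate set below is heavily abridged and the function is hypothetical, not the kubelet's actual code):

```python
# Abridged stand-in for the kubelet's registry of known Kubernetes
# feature gates; the real set is much larger.
KNOWN_KUBE_GATES = {"KMSv1", "CloudDualStackNodeIPs", "NodeSwap",
                    "DisableKubeletCloudCredentialProviders",
                    "ValidatingAdmissionPolicy"}

def partition_gates(requested):
    """Split requested gate names into (recognized, unrecognized);
    the kubelet logs a warning for each unrecognized name."""
    recognized = [g for g in requested if g in KNOWN_KUBE_GATES]
    unrecognized = [g for g in requested if g not in KNOWN_KUBE_GATES]
    return recognized, unrecognized

ok, unknown = partition_gates(["KMSv1", "GatewayAPI", "AlibabaPlatform"])
# ok == ["KMSv1"]; unknown == ["GatewayAPI", "AlibabaPlatform"]
```

This is why the warnings are benign noise on OpenShift nodes: only the recognized subset reaches the effective gate map printed at `feature_gate.go:386`.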
Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725410 31559 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725419 31559 feature_gate.go:330] unrecognized feature gate: Example Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725428 31559 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725460 31559 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725469 31559 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725477 31559 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725485 31559 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725493 31559 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725501 31559 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725509 31559 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725518 31559 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725526 31559 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725535 31559 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 
02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725543 31559 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725551 31559 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725559 31559 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 02:22:27.768187 master-0 kubenswrapper[31559]: W0216 02:22:27.725569 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.725577 31559 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.725584 31559 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.725592 31559 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.725600 31559 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.725608 31559 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: I0216 02:22:27.725622 31559 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.726926 31559 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.727257 31559 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.727265 31559 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.727271 31559 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.727278 31559 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.727285 31559 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.727291 31559 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.727298 31559 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 02:22:27.769367 master-0 kubenswrapper[31559]: W0216 02:22:27.727337 31559 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727345 31559 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727351 31559 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727357 31559 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727362 31559 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727369 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727376 31559 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727382 31559 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727388 31559 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727417 31559 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727423 31559 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727429 31559 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727471 31559 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727483 31559 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727492 31559 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727503 31559 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727509 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727515 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727521 31559 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 02:22:27.770711 master-0 kubenswrapper[31559]: W0216 02:22:27.727553 31559 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727561 31559 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727569 31559 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727576 31559 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727583 31559 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727589 31559 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727602 31559 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727633 31559 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727640 31559 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727646 31559 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727652 31559 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727658 31559 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727663 31559 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727668 31559 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727674 31559 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727690 31559 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727773 31559 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727785 31559 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727796 31559 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 02:22:27.771681 master-0 kubenswrapper[31559]: W0216 02:22:27.727807 31559 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727817 31559 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727830 31559 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727840 31559 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727849 31559 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727859 31559 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727913 31559 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727923 31559 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727930 31559 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727936 31559 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727969 31559 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727975 31559 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727983 31559 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727990 31559 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.727996 31559 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.728004 31559 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.728018 31559 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.728246 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.728260 31559 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.728269 31559 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 02:22:27.772628 master-0 kubenswrapper[31559]: W0216 02:22:27.728276 31559 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: W0216 02:22:27.728283 31559 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: W0216 02:22:27.728317 31559 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: W0216 02:22:27.728325 31559 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: W0216 02:22:27.728331 31559 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: W0216 02:22:27.728337 31559 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: I0216 02:22:27.728349 31559 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: I0216 02:22:27.728787 31559 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: I0216 02:22:27.731351 31559 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: I0216 02:22:27.731514 31559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: I0216 02:22:27.732064 31559 server.go:997] "Starting client certificate rotation"
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: I0216 02:22:27.732079 31559 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: I0216 02:22:27.732514 31559 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 01:56:50 +0000 UTC, rotation deadline is 2026-02-16 21:53:49.38647424 +0000 UTC
Feb 16 02:22:27.773552 master-0 kubenswrapper[31559]: I0216 02:22:27.732941 31559 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h31m21.653546026s for next certificate rotation
Feb 16 02:22:27.774098 master-0 kubenswrapper[31559]: I0216 02:22:27.733268 31559 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 02:22:27.774098 master-0 kubenswrapper[31559]: I0216 02:22:27.735089 31559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 02:22:27.774098 master-0 kubenswrapper[31559]: I0216 02:22:27.739135 31559 log.go:25] "Validated CRI v1 runtime API"
Feb 16 02:22:27.774098 master-0 kubenswrapper[31559]: I0216 02:22:27.771817 31559 log.go:25] "Validated CRI v1 image API"
Feb 16 02:22:27.774098 master-0 kubenswrapper[31559]: I0216 02:22:27.773225 31559 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 02:22:27.786637 master-0 kubenswrapper[31559]: I0216 02:22:27.786572 31559 fs.go:135] Filesystem UUIDs: map[62dc72f5-7748-49f9-b4d1-75449f1d8b55:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 16 02:22:27.790341 master-0 kubenswrapper[31559]: I0216 02:22:27.786622 31559 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0458436e1991707e1a1730601b13074a765d2ad430ff6238224fc587bdd11634/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0458436e1991707e1a1730601b13074a765d2ad430ff6238224fc587bdd11634/userdata/shm major:0 minor:777 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0926ad4d371086f5079e933378a680e9ca645c38b72ce4fd73fbac3448ac6883/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0926ad4d371086f5079e933378a680e9ca645c38b72ce4fd73fbac3448ac6883/userdata/shm major:0 minor:1186 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/19f7fe92f509ebf58263703a24e99425e1eb0493ad65313aae50f23d57b15adc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/19f7fe92f509ebf58263703a24e99425e1eb0493ad65313aae50f23d57b15adc/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a32ad8e692aa770e92a476bea483378880963d5f68eec068f4e2762350b7f00/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a32ad8e692aa770e92a476bea483378880963d5f68eec068f4e2762350b7f00/userdata/shm major:0 minor:485 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2287a210e87155c02ab6e622acb47d96fd89d65dc49d2afbe29745b869fd7b87/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2287a210e87155c02ab6e622acb47d96fd89d65dc49d2afbe29745b869fd7b87/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/248a67424bdbb1372ecb2fb070f15261787aceee6d09513f5274ef915ebe68ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/248a67424bdbb1372ecb2fb070f15261787aceee6d09513f5274ef915ebe68ae/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/277a843980e79e0d5b023668b83b4bebf3c5a0fcb2193476c696948786d785da/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/277a843980e79e0d5b023668b83b4bebf3c5a0fcb2193476c696948786d785da/userdata/shm major:0 minor:562 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/28301e075bce0b536459155be0ee0c5701514de23367f3127b28f30bb9102319/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/28301e075bce0b536459155be0ee0c5701514de23367f3127b28f30bb9102319/userdata/shm major:0 minor:641 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/283ae7ddfaf1351f34dacc8beed10e6971e1ef88d2fe208447bc84c265e096e1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/283ae7ddfaf1351f34dacc8beed10e6971e1ef88d2fe208447bc84c265e096e1/userdata/shm major:0 minor:372 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/32c10b6146223a63a25cf689fec7461ab6f7e2b10981648562bd63e55782f342/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/32c10b6146223a63a25cf689fec7461ab6f7e2b10981648562bd63e55782f342/userdata/shm major:0 minor:543 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/359c73798649b5d5b089f1492d973d2f87ffd23f53f2f5868ba22d8d7543d4cc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/359c73798649b5d5b089f1492d973d2f87ffd23f53f2f5868ba22d8d7543d4cc/userdata/shm major:0 minor:120 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3aa52e31a2b9a476aba0a48b18d458a5a18722a85353d604bbc35df3b9829545/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3aa52e31a2b9a476aba0a48b18d458a5a18722a85353d604bbc35df3b9829545/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3dde0495e5ec9d118f2ad7d1acff82faceae146e9c312fc50bf88cf24e85f414/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3dde0495e5ec9d118f2ad7d1acff82faceae146e9c312fc50bf88cf24e85f414/userdata/shm major:0 minor:915 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4134014c45e6845c874e6a32e463bf4a0bdd4d27746b06893f36026417f8e6db/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4134014c45e6845c874e6a32e463bf4a0bdd4d27746b06893f36026417f8e6db/userdata/shm major:0 minor:458 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4476a4bc67f1c6ee7cd4f19dd630e65931f829a02ef5d857f284d2df2a08dd8d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4476a4bc67f1c6ee7cd4f19dd630e65931f829a02ef5d857f284d2df2a08dd8d/userdata/shm major:0 minor:376 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/450f24c15fabf2ce3093e6381873f7497e388b2d4b0a5acae355eb63b714bf74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/450f24c15fabf2ce3093e6381873f7497e388b2d4b0a5acae355eb63b714bf74/userdata/shm major:0 minor:77 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/460579bff076abf6e4d419f44b68400fd50e9b2bc9a03fc49494f7b68ef04045/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/460579bff076abf6e4d419f44b68400fd50e9b2bc9a03fc49494f7b68ef04045/userdata/shm major:0 minor:95 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/47742d18510812b119307e6d49d3c726c8e73fb1e202f5e57c5cfb5945faf19d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/47742d18510812b119307e6d49d3c726c8e73fb1e202f5e57c5cfb5945faf19d/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/47db8b5a98f2dc9d4943b35b3435ff0c482e2b9de6a84290f317a3f7a8c32db3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/47db8b5a98f2dc9d4943b35b3435ff0c482e2b9de6a84290f317a3f7a8c32db3/userdata/shm major:0 minor:643 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49dbd775e2346c33a70cf58828eb787b58a10dad1b4af76f1c103cfe7d36ce1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49dbd775e2346c33a70cf58828eb787b58a10dad1b4af76f1c103cfe7d36ce1d/userdata/shm major:0 minor:568 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/49e455816343db1118d91c9ccd06253262823aebbe81edbfd55679229021e38d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49e455816343db1118d91c9ccd06253262823aebbe81edbfd55679229021e38d/userdata/shm major:0 minor:566 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b908a349a6f9bb67998eaa77c0cb0b67337fd06d9753261cfa10d0744b50e07/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b908a349a6f9bb67998eaa77c0cb0b67337fd06d9753261cfa10d0744b50e07/userdata/shm major:0 minor:383 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4bdfcd16962800c84c212d75250cbdd940c7636b2eed7dc4de2b7ca286aca5c8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4bdfcd16962800c84c212d75250cbdd940c7636b2eed7dc4de2b7ca286aca5c8/userdata/shm major:0 minor:334 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4e9f20e89ac525e352545c86571266e96559d8a418a9a58ef9e04f14e5bd7411/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4e9f20e89ac525e352545c86571266e96559d8a418a9a58ef9e04f14e5bd7411/userdata/shm major:0 minor:457 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4f55a0409391e0031662fe90965f9c6570290d87940cb9577014c63ddf57bd34/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4f55a0409391e0031662fe90965f9c6570290d87940cb9577014c63ddf57bd34/userdata/shm major:0 minor:1281 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4fa491ec351633ec2d1e1b13971562156d70b9f7ae47702242163650f593c658/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4fa491ec351633ec2d1e1b13971562156d70b9f7ae47702242163650f593c658/userdata/shm major:0 minor:1044 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ae32b90c0d9ee58a1aa24c15f184b908d4118e753afef2c6de3006c4e387eaa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ae32b90c0d9ee58a1aa24c15f184b908d4118e753afef2c6de3006c4e387eaa/userdata/shm major:0 minor:597 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5b8d0bef4d74f4dd2410957462f576db299d55ec6675ac364687f5b27fba5fd5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5b8d0bef4d74f4dd2410957462f576db299d55ec6675ac364687f5b27fba5fd5/userdata/shm major:0 minor:69 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5be46040ce7c3ed3d2b8402e810c80c1d12c6e7664eed391a78116018fc06276/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5be46040ce7c3ed3d2b8402e810c80c1d12c6e7664eed391a78116018fc06276/userdata/shm major:0 minor:1139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/62a59e4f54ec7d1606e491cb0a7ae58230aff5c54f133cd8f1f5aab5922fd486/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/62a59e4f54ec7d1606e491cb0a7ae58230aff5c54f133cd8f1f5aab5922fd486/userdata/shm major:0 minor:640 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896/userdata/shm major:0 minor:278 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/667ddc72e9342237de83564517ac6e7d5264569a48bf2ad6536c670aaa43e0af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/667ddc72e9342237de83564517ac6e7d5264569a48bf2ad6536c670aaa43e0af/userdata/shm major:0 minor:664 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6c0cfba2536520f6ae9edb17fbb1f2d62f0a336f61c097893c2d906e44086caa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6c0cfba2536520f6ae9edb17fbb1f2d62f0a336f61c097893c2d906e44086caa/userdata/shm major:0 minor:373 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6e4bdeb9de57d42d4e965ab51dbc3a2aa873e2346f227d5e0b75299b42bac97b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6e4bdeb9de57d42d4e965ab51dbc3a2aa873e2346f227d5e0b75299b42bac97b/userdata/shm major:0 minor:1135 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/72eeb0715f00e0e3f5313c94b059bdc92b87e369f23bdc44266053d9ec61b371/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/72eeb0715f00e0e3f5313c94b059bdc92b87e369f23bdc44266053d9ec61b371/userdata/shm major:0 minor:573 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7498fd943d93408ca13b7d162b110c819eb4973f8cff45407cca83058e5ae25e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7498fd943d93408ca13b7d162b110c819eb4973f8cff45407cca83058e5ae25e/userdata/shm major:0 minor:889 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7d9105e1418ead3e83bafdc82309e78dfce8ddc065bc7a3854cc209af8115774/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d9105e1418ead3e83bafdc82309e78dfce8ddc065bc7a3854cc209af8115774/userdata/shm major:0 minor:1211 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7ed4da3dff52ca3a67ddd91e519fed951330dcfe09c0cb1c3559d79f70e3808d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ed4da3dff52ca3a67ddd91e519fed951330dcfe09c0cb1c3559d79f70e3808d/userdata/shm major:0 minor:609 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7fa429b0e25c1a3fe1e0505256e1e19c0180fa5596c0ad7692ae5d9ed02cf363/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7fa429b0e25c1a3fe1e0505256e1e19c0180fa5596c0ad7692ae5d9ed02cf363/userdata/shm major:0 minor:639 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/86f122d4749ad1424c989c1ff460643fbb5843c1b201386f01f941204bc41b87/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86f122d4749ad1424c989c1ff460643fbb5843c1b201386f01f941204bc41b87/userdata/shm major:0 minor:375 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8dd32bd58a893bd46ee61ae39a01f4492842e7fb2c4d56eeca513230f073e979/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8dd32bd58a893bd46ee61ae39a01f4492842e7fb2c4d56eeca513230f073e979/userdata/shm major:0 minor:323 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8ec6f01b2f5ea3a8202f9a73fc87e859e09b3484fd1471c52da0bdebc2c97dba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8ec6f01b2f5ea3a8202f9a73fc87e859e09b3484fd1471c52da0bdebc2c97dba/userdata/shm major:0 minor:637 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/92790def01d9723678e72ddb37afa203e6ce284de27eb1ef78e5d202635e3d9e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/92790def01d9723678e72ddb37afa203e6ce284de27eb1ef78e5d202635e3d9e/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/92905ae35545e079e87d8908f39688e6a1a52d219de95b9b7367aad88b020b28/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/92905ae35545e079e87d8908f39688e6a1a52d219de95b9b7367aad88b020b28/userdata/shm major:0 minor:390 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/93b137c9da7cc55e696e731bc17c8d146d60020ad34798363a1b97a514dd88c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/93b137c9da7cc55e696e731bc17c8d146d60020ad34798363a1b97a514dd88c5/userdata/shm major:0 minor:462 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/93c4687f8a629173f2b94639e83fbe20ff1ad3e33cea55c6d4a8747fb84f23bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/93c4687f8a629173f2b94639e83fbe20ff1ad3e33cea55c6d4a8747fb84f23bd/userdata/shm major:0 minor:395 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/93fe8320cd8b094f12e9b856631a3581df910023e217ba523e4fc8bbdc13eff6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/93fe8320cd8b094f12e9b856631a3581df910023e217ba523e4fc8bbdc13eff6/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a1854497ff5ad67aee5d94d7312b87d4baf7af5b3f4e0b712c8c8a5cd16079c9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a1854497ff5ad67aee5d94d7312b87d4baf7af5b3f4e0b712c8c8a5cd16079c9/userdata/shm major:0 minor:1133 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a926069b18af0a45219030c9719e08a473a50355bc2d0c1fd700cdf2592cfa4c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a926069b18af0a45219030c9719e08a473a50355bc2d0c1fd700cdf2592cfa4c/userdata/shm major:0 minor:809 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aab7eeb6e8bf766155c633f93a77e37a4ca269be0e48fc054214cf6cfcafebc6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aab7eeb6e8bf766155c633f93a77e37a4ca269be0e48fc054214cf6cfcafebc6/userdata/shm major:0 minor:381 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aafd16466f6eed6a672c6fa59488bcd1ea8cbc42fa0dfe86540d9e97cd364cb6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aafd16466f6eed6a672c6fa59488bcd1ea8cbc42fa0dfe86540d9e97cd364cb6/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba4f7bcf968605deb298487c68a8b9824d062c97781f01a71a2b9894c49e23ed/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba4f7bcf968605deb298487c68a8b9824d062c97781f01a71a2b9894c49e23ed/userdata/shm major:0 minor:792 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bd5a6caddc3fbffdc59300004e09c460ee8e769674df58e5a2d88b92c736576e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bd5a6caddc3fbffdc59300004e09c460ee8e769674df58e5a2d88b92c736576e/userdata/shm major:0 minor:864 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c4ca903ca847491a2f54378905e0af98cc694e7eee50b6b0fc0352bdc61947b5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c4ca903ca847491a2f54378905e0af98cc694e7eee50b6b0fc0352bdc61947b5/userdata/shm major:0 minor:1174 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c4f6fda3b9ae78250d34cf8b7c30c1c11f08347cd23c715bd4e28a7a30204cde/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c4f6fda3b9ae78250d34cf8b7c30c1c11f08347cd23c715bd4e28a7a30204cde/userdata/shm major:0 minor:555 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7eb8cc3989ea5b05dd2c5ae1244d08f1947ba602e6ae89eb69848dbf5ea8e95/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7eb8cc3989ea5b05dd2c5ae1244d08f1947ba602e6ae89eb69848dbf5ea8e95/userdata/shm major:0 minor:384 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d247b2be5bf1cb751d97a183400c5d6577356c5b5dce9cfa29235bda3ce8eb9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d247b2be5bf1cb751d97a183400c5d6577356c5b5dce9cfa29235bda3ce8eb9a/userdata/shm major:0 minor:603 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d84f8ee868ca64ba6d178e5808a6769d2388e0cd861fe9fb0b41b3a95b7ca11c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d84f8ee868ca64ba6d178e5808a6769d2388e0cd861fe9fb0b41b3a95b7ca11c/userdata/shm major:0 minor:636 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/debb4f03d3db8741a1ba37d50a33cf649d64e1d2b1233200aa072be50cd42b72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/debb4f03d3db8741a1ba37d50a33cf649d64e1d2b1233200aa072be50cd42b72/userdata/shm major:0 minor:1040 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e04624272a6ae061a3899df44b95c1f16652305181d31b35b7d1c234a03226ba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e04624272a6ae061a3899df44b95c1f16652305181d31b35b7d1c234a03226ba/userdata/shm major:0 minor:382 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e55ecc9e109900a7552879fd5133496a07323a5858e7c3c83ce557b826a22cc6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e55ecc9e109900a7552879fd5133496a07323a5858e7c3c83ce557b826a22cc6/userdata/shm major:0 minor:1042 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ec4a831847dbd9a3830625bc2566b19d885784da6d7562dca0d18bf050003dad/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ec4a831847dbd9a3830625bc2566b19d885784da6d7562dca0d18bf050003dad/userdata/shm major:0 minor:468 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72/userdata/shm major:0 minor:166 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272/userdata/shm major:0 minor:321 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f1504c9a4b0e4bf6149e0491153df3c7ffe2143b38a16877cba6aa9e83843b5a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1504c9a4b0e4bf6149e0491153df3c7ffe2143b38a16877cba6aa9e83843b5a/userdata/shm major:0 minor:190 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f218aafff65afcf35d3001ac97851bc4eb0edf9e76199787dd7b9355dbf3fd1e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f218aafff65afcf35d3001ac97851bc4eb0edf9e76199787dd7b9355dbf3fd1e/userdata/shm major:0 minor:1209 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f2d5026b3d62b6eac44704f83447125870ee696cf63066e123a37273291b1d8f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2d5026b3d62b6eac44704f83447125870ee696cf63066e123a37273291b1d8f/userdata/shm major:0 minor:1225 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788/userdata/shm major:0 minor:144 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f9e85a0740edade16aca29d94977dcf8952ab075721ac965ae1df68ba4eec6d2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f9e85a0740edade16aca29d94977dcf8952ab075721ac965ae1df68ba4eec6d2/userdata/shm major:0 minor:492 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb/userdata/shm major:0 minor:140 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:537 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~empty-dir/tmp major:0 minor:536 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~projected/kube-api-access-7kj7r:{mountpoint:/var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~projected/kube-api-access-7kj7r major:0 minor:545 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/kube-api-access-8rc6w:{mountpoint:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/kube-api-access-8rc6w major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~secret/metrics-tls major:0 minor:453 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~projected/kube-api-access-sctj8:{mountpoint:/var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~projected/kube-api-access-sctj8 major:0 minor:1208 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1204 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1198 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0abea413-e08a-465a-8ec4-2be650bfd5bd/volumes/kubernetes.io~projected/kube-api-access-bxlnm:{mountpoint:/var/lib/kubelet/pods/0abea413-e08a-465a-8ec4-2be650bfd5bd/volumes/kubernetes.io~projected/kube-api-access-bxlnm major:0 minor:761 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0abea413-e08a-465a-8ec4-2be650bfd5bd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0abea413-e08a-465a-8ec4-2be650bfd5bd/volumes/kubernetes.io~secret/serving-cert major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0fbc8f91-f8cc-48d8-917c-64fa978069de/volumes/kubernetes.io~projected/kube-api-access-5bnwz:{mountpoint:/var/lib/kubelet/pods/0fbc8f91-f8cc-48d8-917c-64fa978069de/volumes/kubernetes.io~projected/kube-api-access-5bnwz major:0 minor:506 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0fbc8f91-f8cc-48d8-917c-64fa978069de/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/0fbc8f91-f8cc-48d8-917c-64fa978069de/volumes/kubernetes.io~secret/proxy-tls major:0 minor:374 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1487f82c-c14a-4f65-be77-5af2612f56f4/volumes/kubernetes.io~projected/kube-api-access-wxq28:{mountpoint:/var/lib/kubelet/pods/1487f82c-c14a-4f65-be77-5af2612f56f4/volumes/kubernetes.io~projected/kube-api-access-wxq28 major:0 minor:773 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~projected/kube-api-access-85sdg:{mountpoint:/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~projected/kube-api-access-85sdg major:0 minor:1130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/default-certificate major:0 minor:1129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/stats-auth major:0 minor:1123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~projected/kube-api-access major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~secret/serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1a07cd28-a33d-4abd-9198-ba82bacd51ba/volumes/kubernetes.io~projected/kube-api-access-5j9vb:{mountpoint:/var/lib/kubelet/pods/1a07cd28-a33d-4abd-9198-ba82bacd51ba/volumes/kubernetes.io~projected/kube-api-access-5j9vb major:0 minor:589 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~projected/kube-api-access-8d49c:{mountpoint:/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~projected/kube-api-access-8d49c major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~projected/kube-api-access-lvf8t:{mountpoint:/var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~projected/kube-api-access-lvf8t major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:632 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~projected/kube-api-access-9n58j:{mountpoint:/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~projected/kube-api-access-9n58j major:0 minor:621 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/encryption-config major:0 minor:342 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/etcd-client major:0 minor:620 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/serving-cert major:0 minor:343 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~projected/kube-api-access-n4gmn:{mountpoint:/var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~projected/kube-api-access-n4gmn major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:630 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~projected/kube-api-access-l4djm:{mountpoint:/var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~projected/kube-api-access-l4djm major:0 minor:841 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~secret/cert major:0 minor:838 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:839 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27d876a7-6a48-4942-ad96-ed8ed3aa104b/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/27d876a7-6a48-4942-ad96-ed8ed3aa104b/volumes/kubernetes.io~projected/ca-certs major:0 minor:491 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/27d876a7-6a48-4942-ad96-ed8ed3aa104b/volumes/kubernetes.io~projected/kube-api-access-kf7tw:{mountpoint:/var/lib/kubelet/pods/27d876a7-6a48-4942-ad96-ed8ed3aa104b/volumes/kubernetes.io~projected/kube-api-access-kf7tw major:0 minor:443 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a67f799-fd8d-4bee-9d67-720151c1650b/volumes/kubernetes.io~projected/kube-api-access-47lht:{mountpoint:/var/lib/kubelet/pods/2a67f799-fd8d-4bee-9d67-720151c1650b/volumes/kubernetes.io~projected/kube-api-access-47lht major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~projected/kube-api-access-t9sgx:{mountpoint:/var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~projected/kube-api-access-t9sgx major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:454 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/30fef0d5-46ea-4fa3-9ffa-88187d010ffe/volumes/kubernetes.io~projected/kube-api-access-xj8x2:{mountpoint:/var/lib/kubelet/pods/30fef0d5-46ea-4fa3-9ffa-88187d010ffe/volumes/kubernetes.io~projected/kube-api-access-xj8x2 major:0 minor:531 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/30fef0d5-46ea-4fa3-9ffa-88187d010ffe/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/30fef0d5-46ea-4fa3-9ffa-88187d010ffe/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:530 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~projected/kube-api-access-w4mp4:{mountpoint:/var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~projected/kube-api-access-w4mp4 major:0 minor:1206 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1203 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37fd7550-cc81-4180-8540-0bc5f62f63d2/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/37fd7550-cc81-4180-8540-0bc5f62f63d2/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/430c146b-ceaf-411a-add6-ce949243aabf/volumes/kubernetes.io~projected/kube-api-access-vdllq:{mountpoint:/var/lib/kubelet/pods/430c146b-ceaf-411a-add6-ce949243aabf/volumes/kubernetes.io~projected/kube-api-access-vdllq major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~projected/kube-api-access-nl7r8:{mountpoint:/var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~projected/kube-api-access-nl7r8 major:0 minor:70 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~projected/kube-api-access-f8lvq:{mountpoint:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~projected/kube-api-access-f8lvq major:0 minor:245 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/srv-cert major:0 minor:622 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/48863ff6-63ac-42d7-bac7-29d888c92db9/volumes/kubernetes.io~projected/kube-api-access-kgj82:{mountpoint:/var/lib/kubelet/pods/48863ff6-63ac-42d7-bac7-29d888c92db9/volumes/kubernetes.io~projected/kube-api-access-kgj82 major:0 minor:762 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/48863ff6-63ac-42d7-bac7-29d888c92db9/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/48863ff6-63ac-42d7-bac7-29d888c92db9/volumes/kubernetes.io~secret/cert major:0 minor:117 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~projected/kube-api-access-zr872:{mountpoint:/var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~projected/kube-api-access-zr872 major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~secret/serving-cert major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b62004d-7fe3-47ae-8e26-8496befb047c/volumes/kubernetes.io~projected/kube-api-access-ln8g4:{mountpoint:/var/lib/kubelet/pods/5b62004d-7fe3-47ae-8e26-8496befb047c/volumes/kubernetes.io~projected/kube-api-access-ln8g4 major:0 minor:840 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5b62004d-7fe3-47ae-8e26-8496befb047c/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/5b62004d-7fe3-47ae-8e26-8496befb047c/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:829 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b923d74-bad3-4780-8e7e-e8365ac9ea06/volumes/kubernetes.io~projected/kube-api-access-cstlg:{mountpoint:/var/lib/kubelet/pods/5b923d74-bad3-4780-8e7e-e8365ac9ea06/volumes/kubernetes.io~projected/kube-api-access-cstlg major:0 minor:917 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5f810ea0-e32d-4097-beca-5194349a57a6/volumes/kubernetes.io~projected/kube-api-access-p49hf:{mountpoint:/var/lib/kubelet/pods/5f810ea0-e32d-4097-beca-5194349a57a6/volumes/kubernetes.io~projected/kube-api-access-p49hf major:0 minor:795 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/676adb95-3ffd-43e5-89e3-9d7a7d74df28/volumes/kubernetes.io~projected/kube-api-access-lnh76:{mountpoint:/var/lib/kubelet/pods/676adb95-3ffd-43e5-89e3-9d7a7d74df28/volumes/kubernetes.io~projected/kube-api-access-lnh76 major:0 minor:391 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~projected/kube-api-access-kws4h:{mountpoint:/var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~projected/kube-api-access-kws4h major:0 minor:1185 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1184 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1180 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~projected/kube-api-access-7llx6:{mountpoint:/var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~projected/kube-api-access-7llx6 major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~secret/serving-cert major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~projected/kube-api-access-pjsbs:{mountpoint:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~projected/kube-api-access-pjsbs major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~projected/kube-api-access-bff42:{mountpoint:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~projected/kube-api-access-bff42 major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/etcd-client major:0 minor:247 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/serving-cert major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75915935-00a2-44ce-99d1-03e2492044d4/volumes/kubernetes.io~projected/kube-api-access-pc9jt:{mountpoint:/var/lib/kubelet/pods/75915935-00a2-44ce-99d1-03e2492044d4/volumes/kubernetes.io~projected/kube-api-access-pc9jt major:0 minor:1132 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~projected/kube-api-access-6f8fj:{mountpoint:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~projected/kube-api-access-6f8fj major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/srv-cert major:0 minor:618 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7846b339-c46d-4983-b586-a28f2868f665/volumes/kubernetes.io~projected/kube-api-access-5cnfs:{mountpoint:/var/lib/kubelet/pods/7846b339-c46d-4983-b586-a28f2868f665/volumes/kubernetes.io~projected/kube-api-access-5cnfs major:0 minor:495 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7846b339-c46d-4983-b586-a28f2868f665/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/7846b339-c46d-4983-b586-a28f2868f665/volumes/kubernetes.io~secret/cert major:0 minor:465 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7ac81030-35d1-4d86-844d-65d1156d8944/volumes/kubernetes.io~projected/kube-api-access-lqmhs:{mountpoint:/var/lib/kubelet/pods/7ac81030-35d1-4d86-844d-65d1156d8944/volumes/kubernetes.io~projected/kube-api-access-lqmhs major:0 minor:565 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7ac81030-35d1-4d86-844d-65d1156d8944/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/7ac81030-35d1-4d86-844d-65d1156d8944/volumes/kubernetes.io~secret/metrics-tls major:0 minor:570 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~projected/kube-api-access-5m4sb:{mountpoint:/var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~projected/kube-api-access-5m4sb major:0 minor:133 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~secret/metrics-certs major:0 minor:629 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83883885-f493-4559-9c0f-e28d69712475/volumes/kubernetes.io~projected/kube-api-access-6nmjz:{mountpoint:/var/lib/kubelet/pods/83883885-f493-4559-9c0f-e28d69712475/volumes/kubernetes.io~projected/kube-api-access-6nmjz major:0 minor:747 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83883885-f493-4559-9c0f-e28d69712475/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/83883885-f493-4559-9c0f-e28d69712475/volumes/kubernetes.io~secret/serving-cert major:0 minor:662 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~projected/ca-certs major:0 minor:490 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~projected/kube-api-access-lj6v2:{mountpoint:/var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~projected/kube-api-access-lj6v2 major:0 minor:560 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:559 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~projected/kube-api-access-tgqtf:{mountpoint:/var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~projected/kube-api-access-tgqtf major:0 minor:1207 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89041b37-18f6-499d-89ec-a0523a25dc58/volumes/kubernetes.io~projected/kube-api-access-zvzpb:{mountpoint:/var/lib/kubelet/pods/89041b37-18f6-499d-89ec-a0523a25dc58/volumes/kubernetes.io~projected/kube-api-access-zvzpb major:0 minor:921 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~projected/kube-api-access-9snq8:{mountpoint:/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~projected/kube-api-access-9snq8 major:0 minor:1280 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~projected/kube-api-access-fxnht:{mountpoint:/var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~projected/kube-api-access-fxnht major:0 minor:808 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/encryption-config major:0 minor:806 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/etcd-client major:0 minor:807 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/serving-cert major:0 minor:805 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~projected/kube-api-access-bhz2m:{mountpoint:/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~projected/kube-api-access-bhz2m major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~secret/serving-cert major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/97b8261a-91e3-435e-93f8-0a17f30359fd/volumes/kubernetes.io~projected/kube-api-access-pmqrb:{mountpoint:/var/lib/kubelet/pods/97b8261a-91e3-435e-93f8-0a17f30359fd/volumes/kubernetes.io~projected/kube-api-access-pmqrb major:0 minor:656 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/97b8261a-91e3-435e-93f8-0a17f30359fd/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/97b8261a-91e3-435e-93f8-0a17f30359fd/volumes/kubernetes.io~secret/proxy-tls major:0 minor:655 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~projected/kube-api-access major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~secret/serving-cert major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~projected/kube-api-access-k2qvg:{mountpoint:/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~projected/kube-api-access-k2qvg major:0 minor:275 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~secret/serving-cert major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9defdfff-eb18-4beb-9591-918d0e4b4236/volumes/kubernetes.io~projected/kube-api-access-r94gg:{mountpoint:/var/lib/kubelet/pods/9defdfff-eb18-4beb-9591-918d0e4b4236/volumes/kubernetes.io~projected/kube-api-access-r94gg major:0 minor:393 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9defdfff-eb18-4beb-9591-918d0e4b4236/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/9defdfff-eb18-4beb-9591-918d0e4b4236/volumes/kubernetes.io~secret/signing-key major:0 minor:392 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/kube-api-access-s9p9r:{mountpoint:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/kube-api-access-s9p9r major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:467 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3065737-c7c0-4fbb-b484-f2a9204d4908/volumes/kubernetes.io~projected/kube-api-access-w7276:{mountpoint:/var/lib/kubelet/pods/a3065737-c7c0-4fbb-b484-f2a9204d4908/volumes/kubernetes.io~projected/kube-api-access-w7276 major:0 minor:480 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a77e2f8f-d164-4a58-aab2-f3444c05cacb/volumes/kubernetes.io~projected/kube-api-access-bsxrl:{mountpoint:/var/lib/kubelet/pods/a77e2f8f-d164-4a58-aab2-f3444c05cacb/volumes/kubernetes.io~projected/kube-api-access-bsxrl major:0 minor:760 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a77e2f8f-d164-4a58-aab2-f3444c05cacb/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/a77e2f8f-d164-4a58-aab2-f3444c05cacb/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:119 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8d00a01-aa48-4830-a558-93a31cb98b31/volumes/kubernetes.io~projected/kube-api-access-lbmnx:{mountpoint:/var/lib/kubelet/pods/a8d00a01-aa48-4830-a558-93a31cb98b31/volumes/kubernetes.io~projected/kube-api-access-lbmnx major:0 minor:542 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8d00a01-aa48-4830-a558-93a31cb98b31/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/a8d00a01-aa48-4830-a558-93a31cb98b31/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:541 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~projected/kube-api-access major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad700b17-ba2a-41d4-8bec-538a009a613b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ad700b17-ba2a-41d4-8bec-538a009a613b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:587 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ad700b17-ba2a-41d4-8bec-538a009a613b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ad700b17-ba2a-41d4-8bec-538a009a613b/volumes/kubernetes.io~secret/serving-cert major:0 minor:569 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~projected/kube-api-access-t7fmj:{mountpoint:/var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~projected/kube-api-access-t7fmj major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:448 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:455 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~projected/kube-api-access-24b6h:{mountpoint:/var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~projected/kube-api-access-24b6h major:0 minor:276 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:631 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c442d349-668b-4d01-a097-5981b7a04eac/volumes/kubernetes.io~projected/kube-api-access-4vchs:{mountpoint:/var/lib/kubelet/pods/c442d349-668b-4d01-a097-5981b7a04eac/volumes/kubernetes.io~projected/kube-api-access-4vchs major:0 minor:877 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c442d349-668b-4d01-a097-5981b7a04eac/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/c442d349-668b-4d01-a097-5981b7a04eac/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:866 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c4a146b2-c712-408a-97d8-5de3a84f3aaf/volumes/kubernetes.io~projected/kube-api-access-6p8rc:{mountpoint:/var/lib/kubelet/pods/c4a146b2-c712-408a-97d8-5de3a84f3aaf/volumes/kubernetes.io~projected/kube-api-access-6p8rc major:0 minor:901 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c4a146b2-c712-408a-97d8-5de3a84f3aaf/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/c4a146b2-c712-408a-97d8-5de3a84f3aaf/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:890 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8086f93-2d98-4218-afac-20a65e6bf943/volumes/kubernetes.io~projected/kube-api-access-cz49l:{mountpoint:/var/lib/kubelet/pods/c8086f93-2d98-4218-afac-20a65e6bf943/volumes/kubernetes.io~projected/kube-api-access-cz49l major:0 minor:606 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8086f93-2d98-4218-afac-20a65e6bf943/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/c8086f93-2d98-4218-afac-20a65e6bf943/volumes/kubernetes.io~secret/webhook-certs major:0 minor:604 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~projected/kube-api-access-xr7gn:{mountpoint:/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~projected/kube-api-access-xr7gn major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~secret/serving-cert major:0 minor:238 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d008dbd4-e713-4f2e-b64d-ca9cfc83a502/volumes/kubernetes.io~projected/kube-api-access-2582m:{mountpoint:/var/lib/kubelet/pods/d008dbd4-e713-4f2e-b64d-ca9cfc83a502/volumes/kubernetes.io~projected/kube-api-access-2582m major:0 minor:320 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d870332c-2498-4135-a9b3-a71e67c2805b/volumes/kubernetes.io~projected/kube-api-access-wmjjn:{mountpoint:/var/lib/kubelet/pods/d870332c-2498-4135-a9b3-a71e67c2805b/volumes/kubernetes.io~projected/kube-api-access-wmjjn major:0 minor:764 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d870332c-2498-4135-a9b3-a71e67c2805b/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/d870332c-2498-4135-a9b3-a71e67c2805b/volumes/kubernetes.io~secret/proxy-tls major:0 minor:763 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~projected/kube-api-access-grvmr:{mountpoint:/var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~projected/kube-api-access-grvmr major:0 minor:1173 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~secret/certs major:0 minor:1164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1165 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~projected/kube-api-access-jlnkb:{mountpoint:/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~projected/kube-api-access-jlnkb major:0 minor:164 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~secret/webhook-cert major:0 minor:165 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~projected/kube-api-access-hm44l:{mountpoint:/var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~projected/kube-api-access-hm44l major:0 minor:842 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:767 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~secret/webhook-cert major:0 minor:843 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~projected/kube-api-access-gcq6v:{mountpoint:/var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~projected/kube-api-access-gcq6v major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e478bdcc-052e-42f8-91b6-58c26cfc9cfc/volumes/kubernetes.io~projected/kube-api-access-pfgxq:{mountpoint:/var/lib/kubelet/pods/e478bdcc-052e-42f8-91b6-58c26cfc9cfc/volumes/kubernetes.io~projected/kube-api-access-pfgxq major:0 minor:333 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e491b5ed-9c09-4308-9843-fba8d43bd3ae/volumes/kubernetes.io~projected/kube-api-access-j4p8p:{mountpoint:/var/lib/kubelet/pods/e491b5ed-9c09-4308-9843-fba8d43bd3ae/volumes/kubernetes.io~projected/kube-api-access-j4p8p major:0 minor:663 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e491b5ed-9c09-4308-9843-fba8d43bd3ae/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e491b5ed-9c09-4308-9843-fba8d43bd3ae/volumes/kubernetes.io~secret/serving-cert major:0 minor:657 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~projected/kube-api-access-58cq8:{mountpoint:/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~projected/kube-api-access-58cq8 major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75/volumes/kubernetes.io~projected/kube-api-access-4ns9l:{mountpoint:/var/lib/kubelet/pods/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75/volumes/kubernetes.io~projected/kube-api-access-4ns9l major:0 minor:128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fec84b8a-a0d1-4b07-8827-cef0beb89ecd/volumes/kubernetes.io~projected/kube-api-access-88kmw:{mountpoint:/var/lib/kubelet/pods/fec84b8a-a0d1-4b07-8827-cef0beb89ecd/volumes/kubernetes.io~projected/kube-api-access-88kmw major:0 minor:844 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fec84b8a-a0d1-4b07-8827-cef0beb89ecd/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/fec84b8a-a0d1-4b07-8827-cef0beb89ecd/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:845 fsType:tmpfs blockSize:0} 
overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/3cfcebeeab3f705a7ad8b725dc4e5f5ad5596daa4425443330f3e9b3bbe6905e/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-1001:{mountpoint:/var/lib/containers/storage/overlay/08954095dc460665e3facded7f5ea1ab81eb1193ede63f57e7ef85a9cb2b6fef/merged major:0 minor:1001 fsType:overlay blockSize:0} overlay_0-1012:{mountpoint:/var/lib/containers/storage/overlay/9d1e7a0b0fe9c9027f55c5c27011a9d155590f8a469f13b05a8b62166461ca72/merged major:0 minor:1012 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/3ecd272e51f593fb7005a7fca5b9d5762076c46ff38bea887981e22491c55632/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1046:{mountpoint:/var/lib/containers/storage/overlay/48722cdf3f7353f087cf2af04ed5d6edfe6ea2c28d2df5a2bd7c574ac09b0933/merged major:0 minor:1046 fsType:overlay blockSize:0} overlay_0-1048:{mountpoint:/var/lib/containers/storage/overlay/ea61cdeae5f3bfea03e09d54459a3ffc49bad1e44d06dc8deeb9f5caab64321a/merged major:0 minor:1048 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/442d9403d3432bde14aca50bd6213bb4055c51ab437752ea34b94032a97623fc/merged major:0 minor:1050 fsType:overlay blockSize:0} overlay_0-1052:{mountpoint:/var/lib/containers/storage/overlay/a95706180fda998406b64946d74b0661b8ee408cc438465e4d82d010d9f3074e/merged major:0 minor:1052 fsType:overlay blockSize:0} overlay_0-1057:{mountpoint:/var/lib/containers/storage/overlay/319ffc0ad77140a9d382df99192f1a4f8b0f8f9c2f466f3068841bc6f46ced5c/merged major:0 minor:1057 fsType:overlay blockSize:0} overlay_0-1059:{mountpoint:/var/lib/containers/storage/overlay/97b9af73d482abd8aca7cfa8cb2902aa68293722f5482e26aeb916a57e055c67/merged major:0 minor:1059 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/e341cb9331b0e1405401f0adcde76cf27e8779b1826f31464f0a8a51f71e18b1/merged major:0 minor:106 fsType:overlay blockSize:0} 
overlay_0-1061:{mountpoint:/var/lib/containers/storage/overlay/005eedf581bdaf7f92070ed8baf2826e05608d9d7ef87f10405cfb7c2ef19c5e/merged major:0 minor:1061 fsType:overlay blockSize:0} overlay_0-1074:{mountpoint:/var/lib/containers/storage/overlay/fe0acebe29072e8690a47ac41b1fcfcc243bbfdb2098c13c3c5cbb92a2f03b79/merged major:0 minor:1074 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/a33e9d29b672bb188fcfd9cecb6cccfd1e04aef4970b87f12801d37766995cd4/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/fd7026bda4e141813db2b7a6eacaa7afe44fde34c0a89820c9a2d1bf19182f8e/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/310634e6d0f7931700ee0ddf5c483dedca45928231d0b1c092041db02444afe8/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/57c8a7e06fb612752bd6f70e046e8bed1d5099b6dcd7fa0c9f41bbac6358d78f/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-1098:{mountpoint:/var/lib/containers/storage/overlay/c7d3557e4fb2f0d41e19b5b2660bc98f1c1f009c1717111053cd86edc735aafc/merged major:0 minor:1098 fsType:overlay blockSize:0} overlay_0-1101:{mountpoint:/var/lib/containers/storage/overlay/f1120bf81cbce15f7b7fcd7ef3cc8b07975c0dfdcd26dc5d2cbe3279b6b4fc51/merged major:0 minor:1101 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/69638a5418e8621741edd34b0dd982b3d8db8a49690d8d5f03ca6380a406dd62/merged major:0 minor:1107 fsType:overlay blockSize:0} overlay_0-1115:{mountpoint:/var/lib/containers/storage/overlay/0e82ac4a259e76bcd41fe2d55d7d712366623b08e16bae4c819f774fa06c3311/merged major:0 minor:1115 fsType:overlay blockSize:0} overlay_0-1117:{mountpoint:/var/lib/containers/storage/overlay/05345c94d521d61c550b258b9b16233615cf3fe5a5997429151bd8b3df0fce08/merged major:0 minor:1117 fsType:overlay blockSize:0} 
overlay_0-1137:{mountpoint:/var/lib/containers/storage/overlay/bfddb83987792c33e4a804eba17613f3a8928e5954d87d670f6fdb26ff209a56/merged major:0 minor:1137 fsType:overlay blockSize:0} overlay_0-1142:{mountpoint:/var/lib/containers/storage/overlay/6b425ac6e43c0727571d0cdd1feb04f0dead19831b0f3dbee9a6db07804bbdf9/merged major:0 minor:1142 fsType:overlay blockSize:0} overlay_0-1144:{mountpoint:/var/lib/containers/storage/overlay/ce09ba9fcf8cabba81a903dde28acbed7f28be744b3b7882451df3d642482222/merged major:0 minor:1144 fsType:overlay blockSize:0} overlay_0-1145:{mountpoint:/var/lib/containers/storage/overlay/5a7f14f701161743f95e1cbadc5760cd70d4afc765e74a681607586a85d938f9/merged major:0 minor:1145 fsType:overlay blockSize:0} overlay_0-1148:{mountpoint:/var/lib/containers/storage/overlay/9921c3ce4bfb7728160e1ce1ee33731e302a1ec229f40ffbf2dcb60efa322f74/merged major:0 minor:1148 fsType:overlay blockSize:0} overlay_0-115:{mountpoint:/var/lib/containers/storage/overlay/b334761243f18cf47aafca9d2b9ac0a0cdd4ed9bee7f6530fdba6b4f94c4d44a/merged major:0 minor:115 fsType:overlay blockSize:0} overlay_0-1151:{mountpoint:/var/lib/containers/storage/overlay/33734295cd3fdc923285546bfb8544ea0cc9f1510c70d1a413f93fe64354753e/merged major:0 minor:1151 fsType:overlay blockSize:0} overlay_0-1154:{mountpoint:/var/lib/containers/storage/overlay/0a29c7f42455ff93803c3c9271df867b895bc392dc6e456b903ccb3922c99dd3/merged major:0 minor:1154 fsType:overlay blockSize:0} overlay_0-1155:{mountpoint:/var/lib/containers/storage/overlay/68340545cc3ae4bae31f57ef540dbfbb64902dbf950a0d40d7b1b0a02dba9ae4/merged major:0 minor:1155 fsType:overlay blockSize:0} overlay_0-1176:{mountpoint:/var/lib/containers/storage/overlay/f72ab5fafd2ee5747539ababaac50de808854a5385ed150d8d690dff414660b1/merged major:0 minor:1176 fsType:overlay blockSize:0} overlay_0-1178:{mountpoint:/var/lib/containers/storage/overlay/cb1e76b43354f5581849900a02d5c167c32fef2584181a5f17176c67f5f66357/merged major:0 minor:1178 fsType:overlay blockSize:0} 
overlay_0-1188:{mountpoint:/var/lib/containers/storage/overlay/984413ddc1df8e13d2c74aeb260869abd73984fd1643dff86b265f9774308336/merged major:0 minor:1188 fsType:overlay blockSize:0} overlay_0-1190:{mountpoint:/var/lib/containers/storage/overlay/69cf18ff2ca5be3bb89880957b139d95f021564c48b9e6740f4ecf37bcc478b8/merged major:0 minor:1190 fsType:overlay blockSize:0} overlay_0-1192:{mountpoint:/var/lib/containers/storage/overlay/46251856ce4e36a7e2d1c556245a2d0ebf0e092e20974ef3ddbeb9552a7c75df/merged major:0 minor:1192 fsType:overlay blockSize:0} overlay_0-1213:{mountpoint:/var/lib/containers/storage/overlay/cac632442870486895ec68d5b0b76a95935c86a881bd20a43153a02a3e557d37/merged major:0 minor:1213 fsType:overlay blockSize:0} overlay_0-1216:{mountpoint:/var/lib/containers/storage/overlay/17fb5ad35d35066be18ba17f60299cbb61d2a113844fc187485288ea094db73e/merged major:0 minor:1216 fsType:overlay blockSize:0} overlay_0-1218:{mountpoint:/var/lib/containers/storage/overlay/6add11a8d525387b8680eb37dc66dc7785872956bfb373425cda45da4b302d01/merged major:0 minor:1218 fsType:overlay blockSize:0} overlay_0-122:{mountpoint:/var/lib/containers/storage/overlay/8ab7da6c1c07f08f7d77afa1a99d4653b459d42329c716dadaf4c11c6125dc35/merged major:0 minor:122 fsType:overlay blockSize:0} overlay_0-1220:{mountpoint:/var/lib/containers/storage/overlay/a112212cf1b97f23f39553e65ce549dde8ac8aa1620675813710090aa19c7e8f/merged major:0 minor:1220 fsType:overlay blockSize:0} overlay_0-1232:{mountpoint:/var/lib/containers/storage/overlay/a226ec5f426f652a13729def8210750a42144deff7b5dce85509224711c1407f/merged major:0 minor:1232 fsType:overlay blockSize:0} overlay_0-1235:{mountpoint:/var/lib/containers/storage/overlay/d890fdd005b3bd4d1f85489a53133aa5f9ba9102fa9d71215668f8d1234f0632/merged major:0 minor:1235 fsType:overlay blockSize:0} overlay_0-1236:{mountpoint:/var/lib/containers/storage/overlay/8ca455ebd11f9ce2ff641197adfa9eaef2b9808332bc4a71fb57003fa4605eda/merged major:0 minor:1236 fsType:overlay blockSize:0} 
overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/17190f2c99848d13b5b6e0ab4afa9800698257c7f7a0b66097619e9ac7f51e98/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-1240:{mountpoint:/var/lib/containers/storage/overlay/bdfb9d0160b81c867dbe5a891edb9398e8656534ac9f946f98818ffa8494c4b5/merged major:0 minor:1240 fsType:overlay blockSize:0} overlay_0-1242:{mountpoint:/var/lib/containers/storage/overlay/fe5803debe75b2a49fc1215a3c59d2259af8920c070583da92a9a4239112a9bb/merged major:0 minor:1242 fsType:overlay blockSize:0} overlay_0-1248:{mountpoint:/var/lib/containers/storage/overlay/049773eda5cec1e4996178612363722fd57c774316354944329dc00ddb94831f/merged major:0 minor:1248 fsType:overlay blockSize:0} overlay_0-1253:{mountpoint:/var/lib/containers/storage/overlay/07012cb67e56f807e4295ca539f1157a44f5a526c889d16ac3fbd36bd62632a7/merged major:0 minor:1253 fsType:overlay blockSize:0} overlay_0-1255:{mountpoint:/var/lib/containers/storage/overlay/f4c1c2384c5ac8e053f8662f8566f26de094206f570bfc0c47ee59cd715d2013/merged major:0 minor:1255 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/b50a1317c43369c3944f99de41cccc4b6694ce586f9df3f06a9b086923ed4fdc/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-1264:{mountpoint:/var/lib/containers/storage/overlay/b654139eb7e85270c6e1775fa861feaa3ba13f8cb5a1723b21135dcc9976b0e4/merged major:0 minor:1264 fsType:overlay blockSize:0} overlay_0-1266:{mountpoint:/var/lib/containers/storage/overlay/adac968a271dfc21cd54be8738b725a205d88928d0c2951e5a615bc8703f42d5/merged major:0 minor:1266 fsType:overlay blockSize:0} overlay_0-1283:{mountpoint:/var/lib/containers/storage/overlay/93ce33534045175b35edda3136a24fd9da34321f6d25479825d3d68bc196d91b/merged major:0 minor:1283 fsType:overlay blockSize:0} overlay_0-1285:{mountpoint:/var/lib/containers/storage/overlay/ad3e4cabf6b80b8972ae82c80187c451f4608d2dfdd5a9801bf2e42b76964404/merged major:0 minor:1285 fsType:overlay blockSize:0} 
overlay_0-1305:{mountpoint:/var/lib/containers/storage/overlay/bea3db714599dc6932f32fba598f468e17692017e7206e8db6e22d8b061dc268/merged major:0 minor:1305 fsType:overlay blockSize:0} overlay_0-1308:{mountpoint:/var/lib/containers/storage/overlay/a1f3f074511442ce31cbde9e578bb86f8648115eb077e0b93b2c92c1d2112b48/merged major:0 minor:1308 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/6e9afbac13e454e623a09bf9a8403e8fc60c5a051c19f9e25f6eef8dcb794501/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-1313:{mountpoint:/var/lib/containers/storage/overlay/c9dc04c5ad947656c3ec3580a4735dc35a11dcdc66be60d490b284cb610356b4/merged major:0 minor:1313 fsType:overlay blockSize:0} overlay_0-1315:{mountpoint:/var/lib/containers/storage/overlay/5df642eb8c618f24b0a67f8b23801fd50fbcde7cfee0dea156f933a44461d099/merged major:0 minor:1315 fsType:overlay blockSize:0} overlay_0-1318:{mountpoint:/var/lib/containers/storage/overlay/28a2581f8f4c8e0ee3fe3ef2dd7a080bf4000b9aedcd0463f7d4a590ec932ebc/merged major:0 minor:1318 fsType:overlay blockSize:0} overlay_0-1320:{mountpoint:/var/lib/containers/storage/overlay/f5b6675d2833156ab8b0c62b5d42c5b815fc6abb44fba538b2fee3c92871da73/merged major:0 minor:1320 fsType:overlay blockSize:0} overlay_0-1322:{mountpoint:/var/lib/containers/storage/overlay/4c415440dd7323517e0a3f9f1257f9d1c326bc0ec7bf6157e3269cb498747ace/merged major:0 minor:1322 fsType:overlay blockSize:0} overlay_0-1323:{mountpoint:/var/lib/containers/storage/overlay/fd5cab00a022393cb207ee651d67e7820e2b2a86bfee28554fb54a7c44c1afbb/merged major:0 minor:1323 fsType:overlay blockSize:0} overlay_0-1334:{mountpoint:/var/lib/containers/storage/overlay/4c49c811e3cc3bf1b10a8fa3c827022f37bf10e4b2ecf149240de2142d8001cd/merged major:0 minor:1334 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/1d64b1376fddf11bac05423cfa723b2a1c876471b3e65585677b948b17032172/merged major:0 minor:134 fsType:overlay blockSize:0} 
overlay_0-142:{mountpoint:/var/lib/containers/storage/overlay/1a25e24392c0d16a9d21ecc80e6eb250c115d250ad2174ee4413b9befe9b5dbc/merged major:0 minor:142 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/77a0271ce4805a938af450dac532ebb8770ade03689b2bc7034d7d7ebd13a331/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/fdc356e2a1779d00a7b145eabe64616f2dc310bfd5a6e4b058b9bd65bca8e8e6/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/94d6bb60cc779d223c874f2fc63dbb9c23f8cdf8c1504c9a6400090a6763956b/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/32a4a87ec9b18a4c7ebee3f9940a51df5edef2903fa47300d4832a3259958a26/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/b1d9a348a8ef0aacec94ca5b756c279157b27710d898119705d5fc33b9da377b/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/ee772f1396b735d3bc013b62f9d0852b28584129e3bbdb86f13c9e606e71ee1f/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/fe69b399d9e5d4f0846be82e32429b9b86ce0e6a59df88d9618bc8ce3344a154/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/d08b61e4f980d1d105057d5fc8cdc5fbd129f50d8b2381f62ef5e6c45138ba22/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/92114d496354e7a0f1b9599b5c9612b69eaf691c15cd38c65a717387fe415660/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/837f2208beacbe381d071b5ffaa113f747886b6d64d84479ffccdaaefa5022f8/merged major:0 minor:174 fsType:overlay blockSize:0} 
overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/21ca3330603cf7353703c18b2e0b9896585827c73f47ee83db9951a948397cf6/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/02a53f0b605145c6dc566f44c85a7ace626696d28d2e1bab509ff4227fb92947/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-181:{mountpoint:/var/lib/containers/storage/overlay/e8ae0408ec5186131beedaf25132c0dc35375f9a110646edb9e8672ce6b8bab9/merged major:0 minor:181 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/56e70da7ec90b5673f608f7cf2371e8c0cbe16ceaf90240743b9fc829fd9a911/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/0019173c877e5c7080fde2fcef184a539b5105a93fd56db4cdfd15e86f8746aa/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/ad6ce98cc472875b1838a985624e8ca554af39da6ae9cda8c39ec83576d844ab/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/10c31cc9c95b0342588400481edefed964960c338fc787391f9abc5624a6254e/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/441fd4a7abc3373228b43c6d404e3989069290071598ae2efa920f619cc6d7ed/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-198:{mountpoint:/var/lib/containers/storage/overlay/504a20a8cc1350d02dd4fb180d5a4f29a53d26d039177b2aa6186fc6498d4659/merged major:0 minor:198 fsType:overlay blockSize:0} overlay_0-203:{mountpoint:/var/lib/containers/storage/overlay/25072c49dc0a20454397851b4e1fc284b670a9065c6535addb2909fa0343d7e8/merged major:0 minor:203 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/67f96cdb99208cf1d74c8ea90120280ae7e8a371de2f618ee7eac490ed39ad35/merged major:0 minor:208 fsType:overlay blockSize:0} 
overlay_0-213:{mountpoint:/var/lib/containers/storage/overlay/db5e1e6c05588444abaaaf420dc49a4d5da83697eb1ecc0121385a773513ad32/merged major:0 minor:213 fsType:overlay blockSize:0} overlay_0-218:{mountpoint:/var/lib/containers/storage/overlay/58f4fdc277689463210318fc48f92e93ee832b85de994c0e10b702f7303268c1/merged major:0 minor:218 fsType:overlay blockSize:0} overlay_0-219:{mountpoint:/var/lib/containers/storage/overlay/4392bf7be2e6542ab7ae865856f291764611c7a10550e9f8c1e7e11fc3729bfc/merged major:0 minor:219 fsType:overlay blockSize:0} overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/ad87e093dcee11518887953cc5ceb966a9f0677381f734fb832ab23cc8045f6e/merged major:0 minor:228 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/5ee1318c84d0e94fec137a921e81aa0595ee6236631233010cb8c3e463f8af2c/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/148c5cb55218412f773b8cdd2bb29c87c6ebe66eba90ad00ac25344fac91c105/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/87952562372e2a7892b5aa348eb26a6f6a9159aff0fcacc9741b72ddcb34d8e4/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/5f5dbdfc105665922ace53841cf5dd544ef2dc2ff42047d3dba120aa8962bfdc/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/704c1de4b7d88fa1f5d8f4af59f9ed667383fad472b15eebef55812680b59d52/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/415ecab0d00fe172952c193c077298a53c30952ba8c238494c7e1d8f861733c6/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-308:{mountpoint:/var/lib/containers/storage/overlay/bae028af1b38e21a880b73a3d29acac533ea9a612f6b900d2788710cd6a8c11a/merged major:0 minor:308 fsType:overlay blockSize:0} 
overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/7988a1fdb70772cbaec49e60bd582584cb6050e00e3af58858de80ded3cdf803/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/a134cd8c876b5ca62017716e4e84d75eab9ef4ec124dcdc8603a9471fc85bf0e/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/5500e4f9c337cbf9889046ae70595f81db67a36ef9b1031d1410f2913f928766/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/089e0c76d39ba4d9d9d4d5ca153eed13907e5a68dd93980687e47238dddd4ef2/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/cdc5ffc0677aaa1ffbcd10a1bb93c1b752e382bb51184d961440442a0345dc96/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/a32b6f3afae2282bd39a19bbfe3f29f0fa7daa6b8af2b42eaf5bd63248de3eff/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/663d2455882a049143a933c7d9f437cc9a649af9b7fff1875a1c7991638ce891/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/bce687113ff7c771221a83db3194c601355c55519c2980dc8d4249a526d35e71/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/8d7a24c3b6f2ce789c6ae6a155afa074029f9e1c33708af645c92a836acb6747/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-338:{mountpoint:/var/lib/containers/storage/overlay/d655cc5a80a0a89ce609e419bc194fc5bd326ba7bf15e2870130c957f5f5e83b/merged major:0 minor:338 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/1d1498869637ec1df88d6552b0abdf88a32cbb06fe55f3e88de9a69f9958a1f8/merged major:0 minor:340 fsType:overlay blockSize:0} 
overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/3a87fbf730b9cfe97e4b34bc073fbb644227119e618d54971522681cca102590/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/a54aa774f92e6b0791d3133ee0b5df2be29cf9ba11abb8e4ad45e03569010396/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/56f0a6ffb67bfc156880856cb6042ca9e00ba72161c14817b460917b833eef67/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-350:{mountpoint:/var/lib/containers/storage/overlay/2882cafc9038a89f84d84389a227cf735ca970439617dd92696e3adec3625921/merged major:0 minor:350 fsType:overlay blockSize:0} overlay_0-352:{mountpoint:/var/lib/containers/storage/overlay/3f1312c92ebdcef1f297e131cabbdab712389b5831606e410df5b5893e7eebf0/merged major:0 minor:352 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/d5189b9ac43afd9f2ef7d72138fbc9656a68f2c9bb6dcbf8b842a0c04122ab6b/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/c98a378312f355fdae463d30b795ffd25858c666779f0f5da8ecc97600a7f4f9/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/9814408bbedce0995defb2f998511883d23f74bb73c06205a7c311450fe2c414/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-368:{mountpoint:/var/lib/containers/storage/overlay/2fe65d4b41d0c2fd47a27ef1a19347a31babf09d3f0866fe433b36c476c2befe/merged major:0 minor:368 fsType:overlay blockSize:0} overlay_0-379:{mountpoint:/var/lib/containers/storage/overlay/0da2496443d042171838f2aa1df7dc720842fe005da5993da6e0da4816c22833/merged major:0 minor:379 fsType:overlay blockSize:0} overlay_0-394:{mountpoint:/var/lib/containers/storage/overlay/62561c917c5e2d9619367c991b3274eea8cbff001fbacadddb56ec057a166768/merged major:0 minor:394 fsType:overlay blockSize:0} 
overlay_0-398:{mountpoint:/var/lib/containers/storage/overlay/0fd6ff66e36f024332668fae8ab7243a4f31b9a9c2775fc68722034830c01184/merged major:0 minor:398 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/a3bb213ddedb1c5116d9e6eb0ea7b322a8e4d695356160414a7965d3b409ad74/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/3b546528b1eda7172a2df45935f5d526c2fa7b9cf6b914daa7e05a0ff0a78d64/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-404:{mountpoint:/var/lib/containers/storage/overlay/b62016665ceba84c228bbda96f0616df08657265615fa5f329328b200f7bb118/merged major:0 minor:404 fsType:overlay blockSize:0} overlay_0-406:{mountpoint:/var/lib/containers/storage/overlay/4bd551fbb57b2f13d57e54142b72bf43dc003bc60833d25803004d818fddd754/merged major:0 minor:406 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/d45aca8569b946f6090c4182fbcf2728d01a454e619f43a168ccb2a912ca565b/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/8e9f7c33cc0694524a57b8038f4839d2f9bd64f1c12c2aa6597cd2a768c1b75a/merged major:0 minor:409 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/e36be648c894bdc1202df8b56234cdee0cf43cb5562d13bd6b0edb63e6c6fa18/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/61005d5893c92838296cc9ab13acadbfa7b684c2c29cf7712766a9d4bcc2dd9a/merged major:0 minor:416 fsType:overlay blockSize:0} overlay_0-418:{mountpoint:/var/lib/containers/storage/overlay/5bcb0cbeac1b2423890861b8169c5b76225f8ca9e5978343e5b7f3fe310f9497/merged major:0 minor:418 fsType:overlay blockSize:0} overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/f294140d7914a68d286d39d1aac2258f3cbda26a5112513aa72d1a37a897c7d3/merged major:0 minor:424 fsType:overlay blockSize:0} 
overlay_0-435:{mountpoint:/var/lib/containers/storage/overlay/0e6e5fd72463cb84d39628e987bfc859256451fe170eec1930020e6ab9c1ad9e/merged major:0 minor:435 fsType:overlay blockSize:0} overlay_0-438:{mountpoint:/var/lib/containers/storage/overlay/57a0235e13990c27fa1c59f39f72637146f9449d4311f0f6353353eb57416dc0/merged major:0 minor:438 fsType:overlay blockSize:0} overlay_0-439:{mountpoint:/var/lib/containers/storage/overlay/9ea0fa9b4a1c68bf17118e611ae6961d6141e80529821a7a95b77da42e8da659/merged major:0 minor:439 fsType:overlay blockSize:0} overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/801a75e928c5b7dc5078844e8339e5f0e039e697de8b07602c33ead35213db9f/merged major:0 minor:444 fsType:overlay blockSize:0} overlay_0-447:{mountpoint:/var/lib/containers/storage/overlay/2a5a68018383a6e57b53561b05e5f31f5e7cb90de7aad467af9885a691c0efee/merged major:0 minor:447 fsType:overlay blockSize:0} overlay_0-456:{mountpoint:/var/lib/containers/storage/overlay/a1c2de4c8a5ae848f617289747ad5f8d31f6738a78a3775abb8454016723df79/merged major:0 minor:456 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/8f9ff61517b9a44a391f8f46c1386a0daa1e1ad486ac2afc414f9c6e40f906e3/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-461:{mountpoint:/var/lib/containers/storage/overlay/8d58583e2c84a7173e650a44316c8a5b11219987bcee448f92a8a4e3a5bf2ff0/merged major:0 minor:461 fsType:overlay blockSize:0} overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/f5950dfa786b4c6e6d805bdf169d871228046ffaa1e689ac43c25b18e086608b/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/81b32edca6549b112b32fbc3653027accf714a0f2187075679064033b8bb2778/merged major:0 minor:470 fsType:overlay blockSize:0} overlay_0-472:{mountpoint:/var/lib/containers/storage/overlay/6d1ed8a0f588fb3a4776e96a5575a59a2110b1d59200c866342ae1ebbb2a6df3/merged major:0 minor:472 fsType:overlay blockSize:0} 
overlay_0-474:{mountpoint:/var/lib/containers/storage/overlay/6846c6ff0d72366a6fb0d4e39a8eee3044d4fe73a7628731636ce26f828606a2/merged major:0 minor:474 fsType:overlay blockSize:0} overlay_0-476:{mountpoint:/var/lib/containers/storage/overlay/fb88a759ab33f5a9bba776a0e7a30232b7dbcbc0f48c50de215e508ca9f260fd/merged major:0 minor:476 fsType:overlay blockSize:0} overlay_0-478:{mountpoint:/var/lib/containers/storage/overlay/d98081dda50488da5519de67005d5e8a90403c39faa4907c42db884213dadc9d/merged major:0 minor:478 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/40985f42009bbbbdf720043b5ce3c857fd3eb6b2c927f95588c254e9cbcbf987/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-487:{mountpoint:/var/lib/containers/storage/overlay/c9bb045fc8a392ad783be0e7c7b3f6764649b7442e0477b2124046a0c4ce0229/merged major:0 minor:487 fsType:overlay blockSize:0} overlay_0-489:{mountpoint:/var/lib/containers/storage/overlay/97391fd4df3f294c89e7abf844e2bcde16324d08414d5c18ab11b5cf34235396/merged major:0 minor:489 fsType:overlay blockSize:0} overlay_0-498:{mountpoint:/var/lib/containers/storage/overlay/10646cc712c6bd9dc9679f52a836b8609b8cffd1b5034ad48d0e8aaa26976f93/merged major:0 minor:498 fsType:overlay blockSize:0} overlay_0-500:{mountpoint:/var/lib/containers/storage/overlay/f841e6a2c8aafd7b5b46accd36d9a1b3a92deefc89b22bbd00fb364720449cae/merged major:0 minor:500 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/ade814c9d9c886311d90fc913355afe6b99abb90ff6b0339e7e15c6677d8b4d7/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-503:{mountpoint:/var/lib/containers/storage/overlay/f74b01442c8f6857e4eb3891f80dc675d547d2bff0e1c51d53e216f0115e2044/merged major:0 minor:503 fsType:overlay blockSize:0} overlay_0-505:{mountpoint:/var/lib/containers/storage/overlay/91cb3a077cd1b565ed87e8ae8b7b5015e59134c34eff0898440b16cbadbd68c7/merged major:0 minor:505 fsType:overlay blockSize:0} 
overlay_0-511:{mountpoint:/var/lib/containers/storage/overlay/6585c7ef864a4ce1adbe63d55f311e1e69213fdd56ad9436e3876ce588a66f68/merged major:0 minor:511 fsType:overlay blockSize:0} overlay_0-513:{mountpoint:/var/lib/containers/storage/overlay/c72270ab428ee8f1b1e9afa7c382eeb7394d07c566f0ffbbe72ea3b33c6fe09d/merged major:0 minor:513 fsType:overlay blockSize:0} overlay_0-519:{mountpoint:/var/lib/containers/storage/overlay/548916af8057db3cb8faff0c4146cd630b57a626c88f411a5fc24889b8731212/merged major:0 minor:519 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/7db1534167c3b4cf719b50455f9cbd5ca050b66495d2b715a1f8888ce91422e4/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-523:{mountpoint:/var/lib/containers/storage/overlay/3426a2d4fa34e4d2dc95082de278010d63f9e51ee72e13f88833269c034b3dd4/merged major:0 minor:523 fsType:overlay blockSize:0} overlay_0-525:{mountpoint:/var/lib/containers/storage/overlay/ab2c9036dad6b4891ab571f5286a5fe45d4ecca656631f041853fc84b3bcd00e/merged major:0 minor:525 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/42ac584cda789db8255d068a8277ecd8a536e7efc70759b3ffb418a3a64c3390/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-546:{mountpoint:/var/lib/containers/storage/overlay/7ff33ec85f4ddc9dc6a2375dc3af63a0c64e7149dad4d6c77f38c3a1d80ede41/merged major:0 minor:546 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/0f4be0579c11b4fb4f1149787df6b06b9ea3ba63e5873ee1457d0dba3f506422/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-550:{mountpoint:/var/lib/containers/storage/overlay/5ff15a836134b35c5212c4fc5dfee4fe21515d5b91e40440c5c5cfb982286f62/merged major:0 minor:550 fsType:overlay blockSize:0} overlay_0-557:{mountpoint:/var/lib/containers/storage/overlay/c80f41093e41b9f36b6aaf0a0218fce5eb2a3b30d3cebf64c1ec029b5aa9b1f2/merged major:0 minor:557 fsType:overlay blockSize:0} 
overlay_0-561:{mountpoint:/var/lib/containers/storage/overlay/46bff1d1173981641d04582f4627567832d58c0bd53aa8b8d328e34246f54af6/merged major:0 minor:561 fsType:overlay blockSize:0} overlay_0-571:{mountpoint:/var/lib/containers/storage/overlay/6f99986b1c6ffc184f209a3daa6fd0ff8e327d69f13b64c51d1542f271f48e98/merged major:0 minor:571 fsType:overlay blockSize:0} overlay_0-575:{mountpoint:/var/lib/containers/storage/overlay/2a75dad483e9795dbec87cfe114853242de2a30c3e697c50f0861d9c60d77c21/merged major:0 minor:575 fsType:overlay blockSize:0} overlay_0-577:{mountpoint:/var/lib/containers/storage/overlay/07b3a7d8b69accd70850bbd3529f8bb1dff242404b32ba6b4c6f5a2fbe4239d0/merged major:0 minor:577 fsType:overlay blockSize:0} overlay_0-579:{mountpoint:/var/lib/containers/storage/overlay/129c2606b591480322f201325415cbf5b3203d80548689949f4823337b8693e1/merged major:0 minor:579 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/003a5d965794eddcf48ef3e1b9083c017cf586640b35cf0fb0a6fdfc6e597210/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-581:{mountpoint:/var/lib/containers/storage/overlay/38c97720f702f67b99bbebaff4096428a8102747f0eb0318c25b329b044beb75/merged major:0 minor:581 fsType:overlay blockSize:0} overlay_0-583:{mountpoint:/var/lib/containers/storage/overlay/92ad88dd815b17b22673aeaeeafa633819aadbae27765a34e426572d0400d796/merged major:0 minor:583 fsType:overlay blockSize:0} overlay_0-585:{mountpoint:/var/lib/containers/storage/overlay/dba06a7255152b4f41e215d325bb1dc59898773ed294a4403e155b79d5c4d46f/merged major:0 minor:585 fsType:overlay blockSize:0} overlay_0-59:{mountpoint:/var/lib/containers/storage/overlay/c7da201744ce4b12c935181b062b4bdca2e2fb4efcab8b15c8a647d57ceb3e2b/merged major:0 minor:59 fsType:overlay blockSize:0} overlay_0-594:{mountpoint:/var/lib/containers/storage/overlay/6492092442b39c62d5c7b9241dc112fe42e989efa364afc018de3c545e386719/merged major:0 minor:594 fsType:overlay blockSize:0} 
overlay_0-598:{mountpoint:/var/lib/containers/storage/overlay/b4f545f6e16ee3c8bf78f0f1adfa354a668f58bdd312943af541859ed767461e/merged major:0 minor:598 fsType:overlay blockSize:0} overlay_0-600:{mountpoint:/var/lib/containers/storage/overlay/b2ddf638fcb9f58040234ef89a27c6d8b33e17afaab914d3113170e4056b0d92/merged major:0 minor:600 fsType:overlay blockSize:0} overlay_0-602:{mountpoint:/var/lib/containers/storage/overlay/c0c50e637f618783c58057474fda636b51ec68cf6f616162a49eebba57618a15/merged major:0 minor:602 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/e935c84780c94d7dc7c7f9a3012cacb2b38baf5907f7a493e213309a8119aa23/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-61:{mountpoint:/var/lib/containers/storage/overlay/0c16466a6bfc67f30a055f42d25d6025f29ef3587e3aa706505a8c0f8c9de7d3/merged major:0 minor:61 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/aebdbffa9e6bbdac2f3585083b2b698e2342d0c0650efa2e7850f135b5094746/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-613:{mountpoint:/var/lib/containers/storage/overlay/6780dd6cbce42a65f24e15b30a41e787890a57c33358851ff86524a8e66faeef/merged major:0 minor:613 fsType:overlay blockSize:0} overlay_0-617:{mountpoint:/var/lib/containers/storage/overlay/6e0993c7d43f2e3f8de4ab7f72fc98e05875028fd0d6f4ca4ba11bf806aeae50/merged major:0 minor:617 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/60745b1de9155d97c0167d9d8261ef67f903db796bcbaa5e03ffbf46e269a0a0/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-635:{mountpoint:/var/lib/containers/storage/overlay/9943a0353294d3e8c67889e7211afece02d9cfac1fa2c6e14f7f1dda66764fe2/merged major:0 minor:635 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/a58ffc447369a3c7e80620a89182d26170137ecf6bb3df15917525fac1134ab4/merged major:0 
minor:64 fsType:overlay blockSize:0} overlay_0-642:{mountpoint:/var/lib/containers/storage/overlay/52f612aa2d028c323afc09e78cff54cb2f9f7ffb3b2645b3f3a323294557ebe6/merged major:0 minor:642 fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/ab48c0ae3556bc9a4770c2a463b3ba51244a7890fa0b8a4ed4f8de0e07ecc7f9/merged major:0 minor:648 fsType:overlay blockSize:0} overlay_0-650:{mountpoint:/var/lib/containers/storage/overlay/4a37b8e7f60b14ee6044be104ca369f37ee9d20e66c243a2a802f246fedaf3e1/merged major:0 minor:650 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/f0cf9fd10b7243772a8f201aaac9939d54cb248bdc5f532a52b60b7cd4323616/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-654:{mountpoint:/var/lib/containers/storage/overlay/9ff33ad3a9ef6660e4c43b309e9b2856ef96e8f3e0e847d330525d528ef729f8/merged major:0 minor:654 fsType:overlay blockSize:0} overlay_0-658:{mountpoint:/var/lib/containers/storage/overlay/dfdb8ce9b530923a4efcce26c30b62f6cec53e867741e7a37a3c442d481db4fb/merged major:0 minor:658 fsType:overlay blockSize:0} overlay_0-659:{mountpoint:/var/lib/containers/storage/overlay/6c946d15c22973e03e0f6dd717ea8a34f3e35f8df850f9510de2e424d422a85b/merged major:0 minor:659 fsType:overlay blockSize:0} overlay_0-666:{mountpoint:/var/lib/containers/storage/overlay/9f0c2dc52d2aaad3137cf7bb81b87758e0e08d980ecc74e58e15f7ccec09043d/merged major:0 minor:666 fsType:overlay blockSize:0} overlay_0-667:{mountpoint:/var/lib/containers/storage/overlay/bafeca3fbf132e8300579f40bb2a4feefe30a10956fe8dfd413300f81351944c/merged major:0 minor:667 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/dc5537cd8cefb44c5f12c0810ebde698e203eef136fa5634164e675f48ed105a/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-675:{mountpoint:/var/lib/containers/storage/overlay/081735ce144ae8347067242844640db8712d1e905c24b0d6fc7dd3b6b807bcc8/merged major:0 minor:675 
fsType:overlay blockSize:0} overlay_0-679:{mountpoint:/var/lib/containers/storage/overlay/afb086f65a24c3fbba5e06f1cc154525496646c6754a7a298414ce4d81208f5a/merged major:0 minor:679 fsType:overlay blockSize:0} overlay_0-680:{mountpoint:/var/lib/containers/storage/overlay/5fdd18f44bb91568c05dd24da3ce09626e41559f1b47d6946ac22d9eed996c52/merged major:0 minor:680 fsType:overlay blockSize:0} overlay_0-682:{mountpoint:/var/lib/containers/storage/overlay/5d30ac33046f554f4099478dc5022829c22d803554e8e2194d7cc680ea27e8b7/merged major:0 minor:682 fsType:overlay blockSize:0} overlay_0-684:{mountpoint:/var/lib/containers/storage/overlay/125ee15f34fb23f15f7020d3748b0232091d187aae910fe94315e19d2054fc04/merged major:0 minor:684 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/bdb60a3a0afdbe2ec0308cb6ae79f4a9d39acbb6dba6f80227400e2a88452c3b/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-694:{mountpoint:/var/lib/containers/storage/overlay/bfa540ba8020f37b3b1dc18dda7c6824bb468d16bb6eacbb2aac35b896c569f1/merged major:0 minor:694 fsType:overlay blockSize:0} overlay_0-695:{mountpoint:/var/lib/containers/storage/overlay/87a69583ef09b3d905255f7432757c9fc5cf49304c0fe05e5d25f263152d5dea/merged major:0 minor:695 fsType:overlay blockSize:0} overlay_0-699:{mountpoint:/var/lib/containers/storage/overlay/8f368d0be09f3dc95fea3a64d9236a6bea8ba31cf1fc05b152c1f04a2ff53fb8/merged major:0 minor:699 fsType:overlay blockSize:0} overlay_0-701:{mountpoint:/var/lib/containers/storage/overlay/7841679bd6a1b5a7a6dcc5abb6e3a2d83ef2a377b56dae2aeea910dabe75e137/merged major:0 minor:701 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/e29de7dcda9b9acee3bd29c91a97ad7f37b683147febc0261644788c133d48a5/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/54b4a1d64e57e08729ea4b8de3152df197c6e70fef1cd14fd9078ff3af0a06e6/merged major:0 minor:710 fsType:overlay 
blockSize:0} overlay_0-712:{mountpoint:/var/lib/containers/storage/overlay/79f6b441b31d273416c6787c18ede2f742e3d4b0265d031581aba0a0ee519f7e/merged major:0 minor:712 fsType:overlay blockSize:0} overlay_0-716:{mountpoint:/var/lib/containers/storage/overlay/789b54f2ff321c7ebcbdba1cc167f2a745d6b41af3a10746e52fa6211cc77345/merged major:0 minor:716 fsType:overlay blockSize:0} overlay_0-717:{mountpoint:/var/lib/containers/storage/overlay/9d1fadb185081edcfb0760f790ef289aa1fe9f0e97e0774db84abcf414dddf6d/merged major:0 minor:717 fsType:overlay blockSize:0} overlay_0-718:{mountpoint:/var/lib/containers/storage/overlay/7c6da8699512d83ccb73e7e2d61ae2eed9b54c13bbee5970c676cc789520cca6/merged major:0 minor:718 fsType:overlay blockSize:0} overlay_0-740:{mountpoint:/var/lib/containers/storage/overlay/9ad138dc0a0d2268bf4d5acebc3bbdca850985dc4414ed927b5193ae69f19e06/merged major:0 minor:740 fsType:overlay blockSize:0} overlay_0-743:{mountpoint:/var/lib/containers/storage/overlay/00c6204b10401b972ce14cd0ba809c97bc71746a9bcefc0c396cddca860ccc06/merged major:0 minor:743 fsType:overlay blockSize:0} overlay_0-752:{mountpoint:/var/lib/containers/storage/overlay/a3a1b37a50027fd1cda7b81dd9361192c143abc104a5612bcdb0d6b580e125a7/merged major:0 minor:752 fsType:overlay blockSize:0} overlay_0-754:{mountpoint:/var/lib/containers/storage/overlay/69fa6dd7875b4650f86050723cb8769606e950a054e486cc73ab144925867f88/merged major:0 minor:754 fsType:overlay blockSize:0} overlay_0-757:{mountpoint:/var/lib/containers/storage/overlay/beb032e1289712f62ef6b79365133db109d53b3ca2550d7cc1dc8eea660db693/merged major:0 minor:757 fsType:overlay blockSize:0} overlay_0-765:{mountpoint:/var/lib/containers/storage/overlay/0cae2f99ff9b2a5a1f92e729285ba306230586b3d6d4d2744b0b0e95f99903cf/merged major:0 minor:765 fsType:overlay blockSize:0} overlay_0-768:{mountpoint:/var/lib/containers/storage/overlay/db20c2c94960c1388a931b8085f9738e27f4393ac97b407440f3f3fac741433a/merged major:0 minor:768 fsType:overlay blockSize:0} 
overlay_0-769:{mountpoint:/var/lib/containers/storage/overlay/be780a0cd8dfb99128524c05b4634d9a3242d93d32b4ca98b61af9611e7d23c9/merged major:0 minor:769 fsType:overlay blockSize:0} overlay_0-770:{mountpoint:/var/lib/containers/storage/overlay/f7b0ee011492f48eb28f540e5d888c279e66b0476348c35700173a004fabe34c/merged major:0 minor:770 fsType:overlay blockSize:0} overlay_0-775:{mountpoint:/var/lib/containers/storage/overlay/d11e46a24a0f8bec81270c6f3bf935b3e1f036e709d0f6bb21a67b0d9880b10f/merged major:0 minor:775 fsType:overlay blockSize:0} overlay_0-776:{mountpoint:/var/lib/containers/storage/overlay/3058b7c115c5740a9b830cab6e6a0cbd3824aa3be9c31012b192168164588945/merged major:0 minor:776 fsType:overlay blockSize:0} overlay_0-778:{mountpoint:/var/lib/containers/storage/overlay/1dcd63f40ac2cda51c02426dbc57844597ca910364502ea73d174635a12d8bfb/merged major:0 minor:778 fsType:overlay blockSize:0} overlay_0-779:{mountpoint:/var/lib/containers/storage/overlay/1f63db5a782a4e1f2c3b28780ddd850f1669c198f1d7fbf660134a94b707cb2f/merged major:0 minor:779 fsType:overlay blockSize:0} overlay_0-780:{mountpoint:/var/lib/containers/storage/overlay/4f59d0a288e70db9d32672d0818b03f1b499d57521d0d862b194e6a3031de3b9/merged major:0 minor:780 fsType:overlay blockSize:0} overlay_0-783:{mountpoint:/var/lib/containers/storage/overlay/50a51abe7c91e46fe7d7a399c1b0d4c50aff5c486249885cd1cbc484b98c8c01/merged major:0 minor:783 fsType:overlay blockSize:0} overlay_0-785:{mountpoint:/var/lib/containers/storage/overlay/14fc2a8cab69f68428eecae1b43c773aa765b4c9854c46460d755fc251d8c84a/merged major:0 minor:785 fsType:overlay blockSize:0} overlay_0-788:{mountpoint:/var/lib/containers/storage/overlay/a05b4181f20db8aad4e658e2e9997528e5800d1bf6deca58b50d4b848a32bc70/merged major:0 minor:788 fsType:overlay blockSize:0} overlay_0-790:{mountpoint:/var/lib/containers/storage/overlay/d6bb72f9f0ea9e5534f0785a16b9f96481f7a164246c1619e6ffada4cff76e77/merged major:0 minor:790 fsType:overlay blockSize:0} 
overlay_0-794:{mountpoint:/var/lib/containers/storage/overlay/2a0b485fd5b3d9006e583ff62a46180c70dbee6121d3211e3398d8316a13eabd/merged major:0 minor:794 fsType:overlay blockSize:0} overlay_0-797:{mountpoint:/var/lib/containers/storage/overlay/daa920c481774b880c4fcec0d1de07c5009b93bc50ba4fc57bd24cbe656a75fd/merged major:0 minor:797 fsType:overlay blockSize:0} overlay_0-798:{mountpoint:/var/lib/containers/storage/overlay/fa5b5b55b8dbcd97af795fdc65b0c996ee0014c63267f354558901dc07acff46/merged major:0 minor:798 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/a3761562e1b58cacf9df2cb0430c26084b812bd237a422da672bb01ba4a86c81/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-800:{mountpoint:/var/lib/containers/storage/overlay/e2b0857bce5044de9398f5ebd8672fcaf6fd3bba282134e744651bebfbe197d3/merged major:0 minor:800 fsType:overlay blockSize:0} overlay_0-803:{mountpoint:/var/lib/containers/storage/overlay/ba9d2fd8ee6e8e53ff6cbf58e44daf55759024ae0bd92dd45f1c112bfcf1fa8d/merged major:0 minor:803 fsType:overlay blockSize:0} overlay_0-813:{mountpoint:/var/lib/containers/storage/overlay/e1348c161d32d08e2603f6a4fc886d1b46dc74d32b46f5acb51377a2ea6e01a2/merged major:0 minor:813 fsType:overlay blockSize:0} overlay_0-814:{mountpoint:/var/lib/containers/storage/overlay/79b491a65a9c1b940e04c9bc4b6e83615df14e7b6eb56491190cde20c84bce15/merged major:0 minor:814 fsType:overlay blockSize:0} overlay_0-822:{mountpoint:/var/lib/containers/storage/overlay/95931839c58280841ecf5410ec60fb1f9bf7959f24098f90278a7cd3c6ff29c9/merged major:0 minor:822 fsType:overlay blockSize:0} overlay_0-824:{mountpoint:/var/lib/containers/storage/overlay/3a2990ee87e237a41207a443afe522965d99ac91d8095961bd3ca8ae9d73ed36/merged major:0 minor:824 fsType:overlay blockSize:0} overlay_0-848:{mountpoint:/var/lib/containers/storage/overlay/5e6cf27921873116d3af72e941dd8621c81043da670770471c87e032ff218f3b/merged major:0 minor:848 fsType:overlay blockSize:0} 
overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/7d6fe5d54046d95d5282af4eb74e5ab3e6aa23d8398ee7f81cd9dd366a452147/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-851:{mountpoint:/var/lib/containers/storage/overlay/32f4e699dd5482ef9aeb10f3920dcc3b613cb4074c2cf66b43617c56f6d74afe/merged major:0 minor:851 fsType:overlay blockSize:0} overlay_0-854:{mountpoint:/var/lib/containers/storage/overlay/bc5267bd14e8729c81f0d333ac53d10edd2616929975e02ee1a4d020068f2a43/merged major:0 minor:854 fsType:overlay blockSize:0} overlay_0-859:{mountpoint:/var/lib/containers/storage/overlay/044e4d983ceb2636ab9fabb6c87e2548ce68fac1c6b7363c8cddb66a488dc734/merged major:0 minor:859 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/c792957273f482a467b20e024a4f52cf7e20b7ee33f5c3b72a1f1bc8e34a537c/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-868:{mountpoint:/var/lib/containers/storage/overlay/52b9699bef19b2a4f4a5d2201309a6f99f085987cdb1659cd45a0375c94a00ce/merged major:0 minor:868 fsType:overlay blockSize:0} overlay_0-870:{mountpoint:/var/lib/containers/storage/overlay/e89dfb3a3d00614a59d31d6d5dc1068845a33d31e49977a5e3b56509b3674e33/merged major:0 minor:870 fsType:overlay blockSize:0} overlay_0-876:{mountpoint:/var/lib/containers/storage/overlay/e333cd085cfd2ad24a12bb5177157ddfc2748391ac8941986c864a746b22b983/merged major:0 minor:876 fsType:overlay blockSize:0} overlay_0-882:{mountpoint:/var/lib/containers/storage/overlay/07a7076d7b3ce5a39181f33a5c9d81776c6534c1e62c9e2c4318cc5e95ee81f9/merged major:0 minor:882 fsType:overlay blockSize:0} overlay_0-884:{mountpoint:/var/lib/containers/storage/overlay/640045f8bc2e4cf4bfcb886c32970dbb7e117ff0355ea9183c95c4ceb78909e2/merged major:0 minor:884 fsType:overlay blockSize:0} overlay_0-886:{mountpoint:/var/lib/containers/storage/overlay/d584be0b31ab923661ee1120c5bc75b184830db669fc6ef04659a693c865d192/merged major:0 minor:886 fsType:overlay blockSize:0} 
overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/2aa8867c8e7646ebb881e405c3638452340b7d1f8cee871e43656ba881dedf76/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-903:{mountpoint:/var/lib/containers/storage/overlay/51f44370a79450acccdb1722f366358b2bf08ecf520bbdb58b4581b531bee91d/merged major:0 minor:903 fsType:overlay blockSize:0} overlay_0-904:{mountpoint:/var/lib/containers/storage/overlay/1d1cf45279d43a163430e372bc95032024ede1b7edee53a2431c6a28afce0a00/merged major:0 minor:904 fsType:overlay blockSize:0} overlay_0-910:{mountpoint:/var/lib/containers/storage/overlay/f4001837d6bb3ad25e182dc57d01dfd7a112727a8aed1e0492097feaf95f6067/merged major:0 minor:910 fsType:overlay blockSize:0} overlay_0-912:{mountpoint:/var/lib/containers/storage/overlay/e913233bdab429ab7eaa406c935b64ccf32712cf4794d4f7347904f2db388c4f/merged major:0 minor:912 fsType:overlay blockSize:0} overlay_0-914:{mountpoint:/var/lib/containers/storage/overlay/7aefb2b67f4a8360866c45b71dcc91ff6cb4be71841027060ac66873ead4d69b/merged major:0 minor:914 fsType:overlay blockSize:0} overlay_0-919:{mountpoint:/var/lib/containers/storage/overlay/6aed832fc571d0b29dc5b2f7ea1e79486d6727ee0115caf4dc31e67525272a78/merged major:0 minor:919 fsType:overlay blockSize:0} overlay_0-922:{mountpoint:/var/lib/containers/storage/overlay/2657f482c8706d0f8b0533b330a5bcc4591f3ccea617a6c17beb1372c72af337/merged major:0 minor:922 fsType:overlay blockSize:0} overlay_0-927:{mountpoint:/var/lib/containers/storage/overlay/46e854f62cad4e3b1c975067fffded8174d62d3d663994a06a97356596d2a85c/merged major:0 minor:927 fsType:overlay blockSize:0} overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/1d1dc3e2cb4a9ca1525beaeaad36d229425fbf1abbea42170b6ab4bab0180895/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-942:{mountpoint:/var/lib/containers/storage/overlay/d5c78ff20c53971011fa2db2e93b6919ed170368c3daf2de8d19a65037e978ff/merged major:0 minor:942 fsType:overlay blockSize:0} 
overlay_0-948:{mountpoint:/var/lib/containers/storage/overlay/52307469e3a42f21a76ed4a1aa9f434a436b27d30140ef2d6083034c612f48bc/merged major:0 minor:948 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/b5b60d02877c192dbfc087fb26f84d2b5e7903acc886b0cc5e5e1de6c8c50de3/merged major:0 minor:949 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/fb86fb3e5810dace9834cfa6438d4b7234817674ee9b42ba910abebbf9ab3bef/merged major:0 minor:951 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/7f8a981dc07f9701662beb55a61a6410e3849cfd5cc5d73ef47fb8c6bc31ccc6/merged major:0 minor:96 fsType:overlay blockSize:0} overlay_0-962:{mountpoint:/var/lib/containers/storage/overlay/56ee9a6dd6e80c2df6e5005b7c0dd7bb1e79995b071ebdcce4aa274c47e9a949/merged major:0 minor:962 fsType:overlay blockSize:0} overlay_0-964:{mountpoint:/var/lib/containers/storage/overlay/1918a2463684c82f5cebe410059e3206375391f41d1d4efc8b1a580fd2938e32/merged major:0 minor:964 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/f9da706180b087fb6b8363cb8c78502a63372bcaf4cccb2a9b7310b87c30fc20/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-977:{mountpoint:/var/lib/containers/storage/overlay/b587d14ecc6bf4113ab6dac05ea0a709fcb708742c73de547ae7e7238cb4fd35/merged major:0 minor:977 fsType:overlay blockSize:0} overlay_0-98:{mountpoint:/var/lib/containers/storage/overlay/c7f9cba147b6a45a85d5ded0331aae769a261780830c2d2f811d188b190ed9ee/merged major:0 minor:98 fsType:overlay blockSize:0} overlay_0-981:{mountpoint:/var/lib/containers/storage/overlay/43b3c1779efe71f914eda886b2b0bd1827f2324624b940c1b364051df5602e8f/merged major:0 minor:981 fsType:overlay blockSize:0} overlay_0-987:{mountpoint:/var/lib/containers/storage/overlay/d96f0ebc8a19bbb3338ea4a30f5727e06d3b503bcc7a98be41530d48b09dd915/merged major:0 minor:987 fsType:overlay blockSize:0} 
overlay_0-992:{mountpoint:/var/lib/containers/storage/overlay/d549aa424ee759c999c79940bae307ea28cd078f854adc6c8bdd7f71a62a24af/merged major:0 minor:992 fsType:overlay blockSize:0} overlay_0-997:{mountpoint:/var/lib/containers/storage/overlay/c22d8a47614fd94cc85fb9658ae243b23f36a4f746f77ad4c30577bd2a533fac/merged major:0 minor:997 fsType:overlay blockSize:0} overlay_0-998:{mountpoint:/var/lib/containers/storage/overlay/27284a21df797045f264f313fac45be69f67be4ff65eb53aad93d656ae0ff9ed/merged major:0 minor:998 fsType:overlay blockSize:0}] Feb 16 02:22:27.850757 master-0 kubenswrapper[31559]: I0216 02:22:27.848677 31559 manager.go:217] Machine: {Timestamp:2026-02-16 02:22:27.847313651 +0000 UTC m=+0.191919716 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1c19e24b661c4676981e885f5d8565ba SystemUUID:1c19e24b-661c-4676-981e-885f5d8565ba BootID:6af96a74-4ecc-4294-8d2f-0e5321b23e8e Filesystems:[{Device:/run/containers/storage/overlay-containers/1a32ad8e692aa770e92a476bea483378880963d5f68eec068f4e2762350b7f00/userdata/shm DeviceMajor:0 DeviceMinor:485 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-803 DeviceMajor:0 DeviceMinor:803 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/19f7fe92f509ebf58263703a24e99425e1eb0493ad65313aae50f23d57b15adc/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~projected/kube-api-access-w4mp4 DeviceMajor:0 DeviceMinor:1206 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/5b8d0bef4d74f4dd2410957462f576db299d55ec6675ac364687f5b27fba5fd5/userdata/shm DeviceMajor:0 DeviceMinor:69 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3b7c64f3be908fc19e9deab55b835cdfbaa84035406e99a4fd85bf496337788/userdata/shm DeviceMajor:0 DeviceMinor:144 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d870332c-2498-4135-a9b3-a71e67c2805b/volumes/kubernetes.io~projected/kube-api-access-wmjjn DeviceMajor:0 DeviceMinor:764 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-914 DeviceMajor:0 DeviceMinor:914 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1164 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-754 DeviceMajor:0 DeviceMinor:754 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1313 DeviceMajor:0 DeviceMinor:1313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3aa52e31a2b9a476aba0a48b18d458a5a18722a85353d604bbc35df3b9829545/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c4f6fda3b9ae78250d34cf8b7c30c1c11f08347cd23c715bd4e28a7a30204cde/userdata/shm DeviceMajor:0 DeviceMinor:555 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:767 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0458436e1991707e1a1730601b13074a765d2ad430ff6238224fc587bdd11634/userdata/shm DeviceMajor:0 DeviceMinor:777 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1242 DeviceMajor:0 DeviceMinor:1242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/72eeb0715f00e0e3f5313c94b059bdc92b87e369f23bdc44266053d9ec61b371/userdata/shm DeviceMajor:0 DeviceMinor:573 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a77e2f8f-d164-4a58-aab2-f3444c05cacb/volumes/kubernetes.io~projected/kube-api-access-bsxrl DeviceMajor:0 DeviceMinor:760 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1266 DeviceMajor:0 DeviceMinor:1266 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-962 DeviceMajor:0 DeviceMinor:962 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-997 DeviceMajor:0 DeviceMinor:997 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0fbc8f91-f8cc-48d8-917c-64fa978069de/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:374 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-352 DeviceMajor:0 DeviceMinor:352 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~projected/kube-api-access-nl7r8 DeviceMajor:0 DeviceMinor:70 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-594 DeviceMajor:0 DeviceMinor:594 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e478bdcc-052e-42f8-91b6-58c26cfc9cfc/volumes/kubernetes.io~projected/kube-api-access-pfgxq DeviceMajor:0 DeviceMinor:333 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/27d876a7-6a48-4942-ad96-ed8ed3aa104b/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:491 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-461 DeviceMajor:0 DeviceMinor:461 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/48863ff6-63ac-42d7-bac7-29d888c92db9/volumes/kubernetes.io~projected/kube-api-access-kgj82 DeviceMajor:0 DeviceMinor:762 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1165 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-788 DeviceMajor:0 DeviceMinor:788 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-338 DeviceMajor:0 DeviceMinor:338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1184 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1192 DeviceMajor:0 DeviceMinor:1192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1323 DeviceMajor:0 DeviceMinor:1323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/99cec3957b7591d54dab1b67d940469ccca762d577aa986d00f4e46746ba55f5/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/27d876a7-6a48-4942-ad96-ed8ed3aa104b/volumes/kubernetes.io~projected/kube-api-access-kf7tw DeviceMajor:0 DeviceMinor:443 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/283ae7ddfaf1351f34dacc8beed10e6971e1ef88d2fe208447bc84c265e096e1/userdata/shm DeviceMajor:0 DeviceMinor:372 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1129 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2a67f799-fd8d-4bee-9d67-720151c1650b/volumes/kubernetes.io~projected/kube-api-access-47lht DeviceMajor:0 DeviceMinor:267 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:453 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-743 DeviceMajor:0 DeviceMinor:743 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1180 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:838 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a77e2f8f-d164-4a58-aab2-f3444c05cacb/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:119 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-778 DeviceMajor:0 DeviceMinor:778 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1264 DeviceMajor:0 DeviceMinor:1264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:806 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/83883885-f493-4559-9c0f-e28d69712475/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:662 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3a6bee8bdf67740901292411913794cb77a0e097ae15189322f724e1617872d/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/30fef0d5-46ea-4fa3-9ffa-88187d010ffe/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:530 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1059 DeviceMajor:0 DeviceMinor:1059 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:136 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4e9f20e89ac525e352545c86571266e96559d8a418a9a58ef9e04f14e5bd7411/userdata/shm DeviceMajor:0 DeviceMinor:457 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-498 DeviceMajor:0 DeviceMinor:498 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-712 DeviceMajor:0 DeviceMinor:712 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-882 DeviceMajor:0 DeviceMinor:882 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-783 DeviceMajor:0 DeviceMinor:783 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-981 DeviceMajor:0 DeviceMinor:981 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/359c73798649b5d5b089f1492d973d2f87ffd23f53f2f5868ba22d8d7543d4cc/userdata/shm DeviceMajor:0 DeviceMinor:120 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~projected/kube-api-access-8d49c DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-870 DeviceMajor:0 DeviceMinor:870 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-694 DeviceMajor:0 DeviceMinor:694 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~projected/kube-api-access-bhz2m DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/667ddc72e9342237de83564517ac6e7d5264569a48bf2ad6536c670aaa43e0af/userdata/shm DeviceMajor:0 DeviceMinor:664 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~projected/kube-api-access-sctj8 DeviceMajor:0 DeviceMinor:1208 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-757 DeviceMajor:0 DeviceMinor:757 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c442d349-668b-4d01-a097-5981b7a04eac/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:866 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:629 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-456 DeviceMajor:0 DeviceMinor:456 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1504c9a4b0e4bf6149e0491153df3c7ffe2143b38a16877cba6aa9e83843b5a/userdata/shm DeviceMajor:0 DeviceMinor:190 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-927 DeviceMajor:0 DeviceMinor:927 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1216 DeviceMajor:0 DeviceMinor:1216 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ffd086045d7cf31e1d7c2da1b8924ee64ea940c7d3c880260b182cb5c759f90/userdata/shm DeviceMajor:0 
DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1220 DeviceMajor:0 DeviceMinor:1220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~projected/kube-api-access-lvf8t DeviceMajor:0 DeviceMinor:273 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-308 DeviceMajor:0 DeviceMinor:308 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-752 DeviceMajor:0 DeviceMinor:752 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1151 DeviceMajor:0 DeviceMinor:1151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1218 DeviceMajor:0 DeviceMinor:1218 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1115 DeviceMajor:0 DeviceMinor:1115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-575 DeviceMajor:0 DeviceMinor:575 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-769 DeviceMajor:0 DeviceMinor:769 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b7e8c0ad2cdf87e8552c9488b3b26422f87ac52802cbde7bf5707282a58545e/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~projected/kube-api-access-jlnkb DeviceMajor:0 DeviceMinor:164 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-851 DeviceMajor:0 DeviceMinor:851 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-848 DeviceMajor:0 DeviceMinor:848 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1117 DeviceMajor:0 DeviceMinor:1117 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-785 DeviceMajor:0 DeviceMinor:785 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-219 DeviceMajor:0 DeviceMinor:219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1305 DeviceMajor:0 DeviceMinor:1305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~projected/kube-api-access-6f8fj DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~projected/kube-api-access-l4djm DeviceMajor:0 DeviceMinor:841 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-523 DeviceMajor:0 DeviceMinor:523 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c8086f93-2d98-4218-afac-20a65e6bf943/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:604 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1098 DeviceMajor:0 DeviceMinor:1098 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91938be6-9ae4-4849-abe8-fc842daecd23/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:303 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:448 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-394 DeviceMajor:0 DeviceMinor:394 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1285 DeviceMajor:0 DeviceMinor:1285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/980aa005-f51d-4ca2-aee6-a6fdeefd86d0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e491b5ed-9c09-4308-9843-fba8d43bd3ae/volumes/kubernetes.io~projected/kube-api-access-j4p8p DeviceMajor:0 DeviceMinor:663 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-115 DeviceMajor:0 DeviceMinor:115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/62a59e4f54ec7d1606e491cb0a7ae58230aff5c54f133cd8f1f5aab5922fd486/userdata/shm DeviceMajor:0 DeviceMinor:640 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-513 DeviceMajor:0 DeviceMinor:513 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e491b5ed-9c09-4308-9843-fba8d43bd3ae/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:657 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f218aafff65afcf35d3001ac97851bc4eb0edf9e76199787dd7b9355dbf3fd1e/userdata/shm DeviceMajor:0 DeviceMinor:1209 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-977 DeviceMajor:0 DeviceMinor:977 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-487 DeviceMajor:0 DeviceMinor:487 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~projected/kube-api-access-lj6v2 DeviceMajor:0 DeviceMinor:560 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-505 DeviceMajor:0 DeviceMinor:505 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-61 DeviceMajor:0 DeviceMinor:61 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49e455816343db1118d91c9ccd06253262823aebbe81edbfd55679229021e38d/userdata/shm DeviceMajor:0 DeviceMinor:566 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-579 DeviceMajor:0 DeviceMinor:579 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8d00a01-aa48-4830-a558-93a31cb98b31/volumes/kubernetes.io~projected/kube-api-access-lbmnx DeviceMajor:0 DeviceMinor:542 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1315 DeviceMajor:0 DeviceMinor:1315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~projected/kube-api-access-gcq6v DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1188 DeviceMajor:0 DeviceMinor:1188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-666 DeviceMajor:0 DeviceMinor:666 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1178 DeviceMajor:0 DeviceMinor:1178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8ec6f01b2f5ea3a8202f9a73fc87e859e09b3484fd1471c52da0bdebc2c97dba/userdata/shm DeviceMajor:0 DeviceMinor:637 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e04624272a6ae061a3899df44b95c1f16652305181d31b35b7d1c234a03226ba/userdata/shm DeviceMajor:0 DeviceMinor:382 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a1854497ff5ad67aee5d94d7312b87d4baf7af5b3f4e0b712c8c8a5cd16079c9/userdata/shm DeviceMajor:0 DeviceMinor:1133 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f9e85a0740edade16aca29d94977dcf8952ab075721ac965ae1df68ba4eec6d2/userdata/shm DeviceMajor:0 DeviceMinor:492 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/83883885-f493-4559-9c0f-e28d69712475/volumes/kubernetes.io~projected/kube-api-access-6nmjz DeviceMajor:0 DeviceMinor:747 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/47db8b5a98f2dc9d4943b35b3435ff0c482e2b9de6a84290f317a3f7a8c32db3/userdata/shm DeviceMajor:0 DeviceMinor:643 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:805 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-868 DeviceMajor:0 DeviceMinor:868 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/97b8261a-91e3-435e-93f8-0a17f30359fd/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:655 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-435 DeviceMajor:0 
DeviceMinor:435 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-306 DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-444 DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1205 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fc01c156470f39fb1cec479037027fa891a6711ffbe4b5da46389ad652e479bb/userdata/shm DeviceMajor:0 DeviceMinor:140 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-740 DeviceMajor:0 DeviceMinor:740 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-571 DeviceMajor:0 DeviceMinor:571 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1061 DeviceMajor:0 DeviceMinor:1061 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/75915935-00a2-44ce-99d1-03e2492044d4/volumes/kubernetes.io~projected/kube-api-access-pc9jt DeviceMajor:0 DeviceMinor:1132 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1255 DeviceMajor:0 DeviceMinor:1255 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7846b339-c46d-4983-b586-a28f2868f665/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:465 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-439 DeviceMajor:0 DeviceMinor:439 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d870332c-2498-4135-a9b3-a71e67c2805b/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:763 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ed4da3dff52ca3a67ddd91e519fed951330dcfe09c0cb1c3559d79f70e3808d/userdata/shm DeviceMajor:0 DeviceMinor:609 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-478 DeviceMajor:0 DeviceMinor:478 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:233 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65db28ff03b176892a8eec81629c7d19dbef022673e856af206a72dde2a48896/userdata/shm DeviceMajor:0 DeviceMinor:278 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-519 DeviceMajor:0 DeviceMinor:519 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-525 DeviceMajor:0 DeviceMinor:525 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-218 DeviceMajor:0 DeviceMinor:218 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-602 DeviceMajor:0 DeviceMinor:602 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-466 DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-699 DeviceMajor:0 DeviceMinor:699 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e55ecc9e109900a7552879fd5133496a07323a5858e7c3c83ce557b826a22cc6/userdata/shm DeviceMajor:0 DeviceMinor:1042 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-912 DeviceMajor:0 DeviceMinor:912 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f2d5026b3d62b6eac44704f83447125870ee696cf63066e123a37273291b1d8f/userdata/shm DeviceMajor:0 DeviceMinor:1225 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1190 DeviceMajor:0 DeviceMinor:1190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-922 DeviceMajor:0 DeviceMinor:922 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-503 DeviceMajor:0 DeviceMinor:503 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~projected/kube-api-access-t7fmj DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~projected/kube-api-access-n4gmn 
DeviceMajor:0 DeviceMinor:271 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2287a210e87155c02ab6e622acb47d96fd89d65dc49d2afbe29745b869fd7b87/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-476 DeviceMajor:0 DeviceMinor:476 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:343 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0fbc8f91-f8cc-48d8-917c-64fa978069de/volumes/kubernetes.io~projected/kube-api-access-5bnwz DeviceMajor:0 DeviceMinor:506 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1155 DeviceMajor:0 DeviceMinor:1155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4134014c45e6845c874e6a32e463bf4a0bdd4d27746b06893f36026417f8e6db/userdata/shm DeviceMajor:0 DeviceMinor:458 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-684 DeviceMajor:0 DeviceMinor:684 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:807 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:237 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/93fe8320cd8b094f12e9b856631a3581df910023e217ba523e4fc8bbdc13eff6/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:536 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1a07cd28-a33d-4abd-9198-ba82bacd51ba/volumes/kubernetes.io~projected/kube-api-access-5j9vb DeviceMajor:0 DeviceMinor:589 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3dde0495e5ec9d118f2ad7d1acff82faceae146e9c312fc50bf88cf24e85f414/userdata/shm DeviceMajor:0 DeviceMinor:915 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~projected/kube-api-access-xr7gn DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-635 DeviceMajor:0 DeviceMinor:635 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/456e6c3a-c16c-470b-a0cd-bb79865b54f0/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7f0f9b7d-e663-4927-861b-a9544d483b6e/volumes/kubernetes.io~projected/kube-api-access-5m4sb DeviceMajor:0 DeviceMinor:133 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-654 DeviceMajor:0 DeviceMinor:654 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~projected/kube-api-access-24b6h DeviceMajor:0 DeviceMinor:276 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1f2d2601-481d-4e86-ac4c-3d34d5691261/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-472 DeviceMajor:0 DeviceMinor:472 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86f122d4749ad1424c989c1ff460643fbb5843c1b201386f01f941204bc41b87/userdata/shm DeviceMajor:0 DeviceMinor:375 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-181 DeviceMajor:0 DeviceMinor:181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-585 DeviceMajor:0 DeviceMinor:585 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c02961f-30ec-4405-b7fa-9c4192342ae9/volumes/kubernetes.io~projected/kube-api-access-7llx6 DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-600 DeviceMajor:0 DeviceMinor:600 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a926069b18af0a45219030c9719e08a473a50355bc2d0c1fd700cdf2592cfa4c/userdata/shm DeviceMajor:0 DeviceMinor:809 Capacity:67108864 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:overlay_0-998 DeviceMajor:0 DeviceMinor:998 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1001 DeviceMajor:0 DeviceMinor:1001 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d247b2be5bf1cb751d97a183400c5d6577356c5b5dce9cfa29235bda3ce8eb9a/userdata/shm DeviceMajor:0 DeviceMinor:603 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1144 DeviceMajor:0 DeviceMinor:1144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/450f24c15fabf2ce3093e6381873f7497e388b2d4b0a5acae355eb63b714bf74/userdata/shm DeviceMajor:0 DeviceMinor:77 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1318 DeviceMajor:0 DeviceMinor:1318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-583 DeviceMajor:0 DeviceMinor:583 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ad700b17-ba2a-41d4-8bec-538a009a613b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:587 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-557 DeviceMajor:0 DeviceMinor:557 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d8bbd369-4219-48ef-ae2d-b45c81789403/volumes/kubernetes.io~projected/kube-api-access-grvmr DeviceMajor:0 DeviceMinor:1173 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1308 DeviceMajor:0 DeviceMinor:1308 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8f918d5b-1a4c-4b56-98a4-5cef638bb615/volumes/kubernetes.io~projected/kube-api-access-fxnht DeviceMajor:0 DeviceMinor:808 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-682 DeviceMajor:0 DeviceMinor:682 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1052 DeviceMajor:0 DeviceMinor:1052 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~projected/kube-api-access-bff42 DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/debb4f03d3db8741a1ba37d50a33cf649d64e1d2b1233200aa072be50cd42b72/userdata/shm DeviceMajor:0 DeviceMinor:1040 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/kube-api-access-8rc6w DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/92905ae35545e079e87d8908f39688e6a1a52d219de95b9b7367aad88b020b28/userdata/shm DeviceMajor:0 DeviceMinor:390 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-695 DeviceMajor:0 DeviceMinor:695 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fec84b8a-a0d1-4b07-8827-cef0beb89ecd/volumes/kubernetes.io~projected/kube-api-access-88kmw DeviceMajor:0 DeviceMinor:844 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0926ad4d371086f5079e933378a680e9ca645c38b72ce4fd73fbac3448ac6883/userdata/shm DeviceMajor:0 DeviceMinor:1186 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bd5a6caddc3fbffdc59300004e09c460ee8e769674df58e5a2d88b92c736576e/userdata/shm DeviceMajor:0 DeviceMinor:864 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1176 DeviceMajor:0 DeviceMinor:1176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4f55a0409391e0031662fe90965f9c6570290d87940cb9577014c63ddf57bd34/userdata/shm DeviceMajor:0 DeviceMinor:1281 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7ac81030-35d1-4d86-844d-65d1156d8944/volumes/kubernetes.io~projected/kube-api-access-lqmhs DeviceMajor:0 DeviceMinor:565 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-546 DeviceMajor:0 DeviceMinor:546 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76915cba-7c11-4bd8-9943-81de74e7781b/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:618 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1057 DeviceMajor:0 DeviceMinor:1057 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1213 DeviceMajor:0 DeviceMinor:1213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48c24060310ea59ea7726c8831161c102b57d6b94e31b9bc5a4ace9382583b32/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:559 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-854 DeviceMajor:0 DeviceMinor:854 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-203 DeviceMajor:0 DeviceMinor:203 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/a8f33151-61df-4b66-ba85-9ba210779059/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-577 DeviceMajor:0 DeviceMinor:577 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-701 DeviceMajor:0 DeviceMinor:701 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-992 DeviceMajor:0 DeviceMinor:992 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~projected/kube-api-access-t9sgx DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7eb8cc3989ea5b05dd2c5ae1244d08f1947ba602e6ae89eb69848dbf5ea8e95/userdata/shm DeviceMajor:0 DeviceMinor:384 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1101 DeviceMajor:0 DeviceMinor:1101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-398 DeviceMajor:0 DeviceMinor:398 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ad700b17-ba2a-41d4-8bec-538a009a613b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:569 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-876 DeviceMajor:0 DeviceMinor:876 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-659 DeviceMajor:0 DeviceMinor:659 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-790 DeviceMajor:0 DeviceMinor:790 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-1322 DeviceMajor:0 DeviceMinor:1322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/27a42eb0-677c-414d-b0ec-f945ec39b7e9/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:839 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d9105e1418ead3e83bafdc82309e78dfce8ddc065bc7a3854cc209af8115774/userdata/shm DeviceMajor:0 DeviceMinor:1211 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-550 DeviceMajor:0 DeviceMinor:550 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~projected/kube-api-access-9n58j DeviceMajor:0 DeviceMinor:621 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~projected/kube-api-access-f8lvq DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d008dbd4-e713-4f2e-b64d-ca9cfc83a502/volumes/kubernetes.io~projected/kube-api-access-2582m DeviceMajor:0 DeviceMinor:320 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~projected/kube-api-access-hm44l DeviceMajor:0 DeviceMinor:842 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/89041b37-18f6-499d-89ec-a0523a25dc58/volumes/kubernetes.io~projected/kube-api-access-zvzpb DeviceMajor:0 DeviceMinor:921 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-910 DeviceMajor:0 DeviceMinor:910 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1240 DeviceMajor:0 DeviceMinor:1240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-824 DeviceMajor:0 DeviceMinor:824 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-765 DeviceMajor:0 DeviceMinor:765 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37fd7550-cc81-4180-8540-0bc5f62f63d2/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1127 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6e4bdeb9de57d42d4e965ab51dbc3a2aa873e2346f227d5e0b75299b42bac97b/userdata/shm DeviceMajor:0 DeviceMinor:1135 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~projected/kube-api-access-tgqtf DeviceMajor:0 DeviceMinor:1207 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1074 DeviceMajor:0 DeviceMinor:1074 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-987 DeviceMajor:0 DeviceMinor:987 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ffa4db8-97da-42de-8e51-35680f518ca7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:454 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/92790def01d9723678e72ddb37afa203e6ce284de27eb1ef78e5d202635e3d9e/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/23755f7f-dce6-4dcf-9664-22e3aedb5c81/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:630 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4bdfcd16962800c84c212d75250cbdd940c7636b2eed7dc4de2b7ca286aca5c8/userdata/shm DeviceMajor:0 DeviceMinor:334 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~projected/kube-api-access-k2qvg DeviceMajor:0 DeviceMinor:275 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-886 DeviceMajor:0 DeviceMinor:886 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04804a08-e3a5-46f3-abcb-967866834baa/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:270 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-500 DeviceMajor:0 DeviceMinor:500 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0abea413-e08a-465a-8ec4-2be650bfd5bd/volumes/kubernetes.io~projected/kube-api-access-bxlnm DeviceMajor:0 DeviceMinor:761 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-964 DeviceMajor:0 DeviceMinor:964 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-198 DeviceMajor:0 DeviceMinor:198 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} 
{Device:/var/lib/kubelet/pods/724ac845-3835-458b-9645-e665be135ff9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-675 DeviceMajor:0 DeviceMinor:675 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1148 DeviceMajor:0 DeviceMinor:1148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dbc5b101-936f-4bf3-bbf3-f30966b0ab50/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:165 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-581 DeviceMajor:0 DeviceMinor:581 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-667 DeviceMajor:0 DeviceMinor:667 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-718 DeviceMajor:0 DeviceMinor:718 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-642 DeviceMajor:0 DeviceMinor:642 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/5f810ea0-e32d-4097-beca-5194349a57a6/volumes/kubernetes.io~projected/kube-api-access-p49hf DeviceMajor:0 DeviceMinor:795 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1283 DeviceMajor:0 DeviceMinor:1283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4fa491ec351633ec2d1e1b13971562156d70b9f7ae47702242163650f593c658/userdata/shm DeviceMajor:0 DeviceMinor:1044 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-797 DeviceMajor:0 DeviceMinor:797 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1198 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef2bb9465307e33223e533d623bdfd016157fa7b6e73255487b68bf12c529272/userdata/shm DeviceMajor:0 DeviceMinor:321 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5b923d74-bad3-4780-8e7e-e8365ac9ea06/volumes/kubernetes.io~projected/kube-api-access-cstlg DeviceMajor:0 DeviceMinor:917 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c442d349-668b-4d01-a097-5981b7a04eac/volumes/kubernetes.io~projected/kube-api-access-4vchs DeviceMajor:0 DeviceMinor:877 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1123 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4a5b01c1-1231-4e69-8b6c-c4981b65b26e/volumes/kubernetes.io~projected/kube-api-access-zr872 DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8dd32bd58a893bd46ee61ae39a01f4492842e7fb2c4d56eeca513230f073e979/userdata/shm DeviceMajor:0 DeviceMinor:323 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba4f7bcf968605deb298487c68a8b9824d062c97781f01a71a2b9894c49e23ed/userdata/shm DeviceMajor:0 DeviceMinor:792 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/32c10b6146223a63a25cf689fec7461ab6f7e2b10981648562bd63e55782f342/userdata/shm DeviceMajor:0 DeviceMinor:543 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-213 DeviceMajor:0 DeviceMinor:213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-142 DeviceMajor:0 DeviceMinor:142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-98 DeviceMajor:0 DeviceMinor:98 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-775 DeviceMajor:0 DeviceMinor:775 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3065737-c7c0-4fbb-b484-f2a9204d4908/volumes/kubernetes.io~projected/kube-api-access-w7276 DeviceMajor:0 DeviceMinor:480 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5b62004d-7fe3-47ae-8e26-8496befb047c/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:829 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/48863ff6-63ac-42d7-bac7-29d888c92db9/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:117 Capacity:49335554048 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5be46040ce7c3ed3d2b8402e810c80c1d12c6e7664eed391a78116018fc06276/userdata/shm DeviceMajor:0 DeviceMinor:1139 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0a900f93-91c9-4782-89a3-1cc09f3aec95/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1204 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1012 DeviceMajor:0 DeviceMinor:1012 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9be9fd24-fdb1-43dc-80b8-68020427bfd7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75/volumes/kubernetes.io~projected/kube-api-access-4ns9l DeviceMajor:0 DeviceMinor:128 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0abea413-e08a-465a-8ec4-2be650bfd5bd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:118 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c8086f93-2d98-4218-afac-20a65e6bf943/volumes/kubernetes.io~projected/kube-api-access-cz49l DeviceMajor:0 DeviceMinor:606 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/47742d18510812b119307e6d49d3c726c8e73fb1e202f5e57c5cfb5945faf19d/userdata/shm 
DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7ac81030-35d1-4d86-844d-65d1156d8944/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:570 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-768 DeviceMajor:0 DeviceMinor:768 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1248 DeviceMajor:0 DeviceMinor:1248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7fa429b0e25c1a3fe1e0505256e1e19c0180fa5596c0ad7692ae5d9ed02cf363/userdata/shm DeviceMajor:0 DeviceMinor:639 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-779 DeviceMajor:0 DeviceMinor:779 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-438 DeviceMajor:0 DeviceMinor:438 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49dbd775e2346c33a70cf58828eb787b58a10dad1b4af76f1c103cfe7d36ce1d/userdata/shm DeviceMajor:0 DeviceMinor:568 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1128 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/17390d9a-148d-4927-a831-5bc4873c43d5/volumes/kubernetes.io~projected/kube-api-access-85sdg DeviceMajor:0 DeviceMinor:1130 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-598 DeviceMajor:0 DeviceMinor:598 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-716 DeviceMajor:0 DeviceMinor:716 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-859 DeviceMajor:0 DeviceMinor:859 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1279 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/93b137c9da7cc55e696e731bc17c8d146d60020ad34798363a1b97a514dd88c5/userdata/shm DeviceMajor:0 DeviceMinor:462 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/21686a6d-f685-4fb6-98af-3e8a39c5981b/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:632 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/kube-api-access-s9p9r DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ee59c4907031f715d1c9629d7cd8d627d819c1de44021beecbbfe36a41fcaf72/userdata/shm DeviceMajor:0 DeviceMinor:166 Capacity:67108864 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/30fef0d5-46ea-4fa3-9ffa-88187d010ffe/volumes/kubernetes.io~projected/kube-api-access-xj8x2 DeviceMajor:0 DeviceMinor:531 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~projected/kube-api-access-9snq8 DeviceMajor:0 DeviceMinor:1280 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-350 DeviceMajor:0 DeviceMinor:350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-447 DeviceMajor:0 DeviceMinor:447 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d84f8ee868ca64ba6d178e5808a6769d2388e0cd861fe9fb0b41b3a95b7ca11c/userdata/shm DeviceMajor:0 DeviceMinor:636 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:467 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:622 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-679 DeviceMajor:0 DeviceMinor:679 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-613 DeviceMajor:0 DeviceMinor:613 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/86af980a-2653-40c3-a368-a795d7fb8558/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1215 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-301 
DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-650 DeviceMajor:0 DeviceMinor:650 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/277a843980e79e0d5b023668b83b4bebf3c5a0fcb2193476c696948786d785da/userdata/shm DeviceMajor:0 DeviceMinor:562 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-942 DeviceMajor:0 DeviceMinor:942 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1278 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-368 DeviceMajor:0 DeviceMinor:368 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fec84b8a-a0d1-4b07-8827-cef0beb89ecd/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:845 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9defdfff-eb18-4beb-9591-918d0e4b4236/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:392 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ae32b90c0d9ee58a1aa24c15f184b908d4118e753afef2c6de3006c4e387eaa/userdata/shm DeviceMajor:0 DeviceMinor:597 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/28301e075bce0b536459155be0ee0c5701514de23367f3127b28f30bb9102319/userdata/shm DeviceMajor:0 DeviceMinor:641 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-903 DeviceMajor:0 DeviceMinor:903 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1154 DeviceMajor:0 DeviceMinor:1154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-717 DeviceMajor:0 
DeviceMinor:717 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-228 DeviceMajor:0 DeviceMinor:228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1046 DeviceMajor:0 DeviceMinor:1046 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1487f82c-c14a-4f65-be77-5af2612f56f4/volumes/kubernetes.io~projected/kube-api-access-wxq28 DeviceMajor:0 DeviceMinor:773 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/695d1f01-d3c1-4fb9-9dda-daf33eae11f5/volumes/kubernetes.io~projected/kube-api-access-kws4h DeviceMajor:0 DeviceMinor:1185 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7846b339-c46d-4983-b586-a28f2868f665/volumes/kubernetes.io~projected/kube-api-access-5cnfs DeviceMajor:0 DeviceMinor:495 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-800 DeviceMajor:0 DeviceMinor:800 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1320 DeviceMajor:0 DeviceMinor:1320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9defdfff-eb18-4beb-9591-918d0e4b4236/volumes/kubernetes.io~projected/kube-api-access-r94gg DeviceMajor:0 DeviceMinor:393 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ec4a831847dbd9a3830625bc2566b19d885784da6d7562dca0d18bf050003dad/userdata/shm DeviceMajor:0 DeviceMinor:468 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5b62004d-7fe3-47ae-8e26-8496befb047c/volumes/kubernetes.io~projected/kube-api-access-ln8g4 DeviceMajor:0 DeviceMinor:840 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/430c146b-ceaf-411a-add6-ce949243aabf/volumes/kubernetes.io~projected/kube-api-access-vdllq DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-474 DeviceMajor:0 DeviceMinor:474 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1203 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-379 DeviceMajor:0 DeviceMinor:379 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bde83629-b39c-401e-bc30-5ce205638918/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:631 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c4a146b2-c712-408a-97d8-5de3a84f3aaf/volumes/kubernetes.io~projected/kube-api-access-6p8rc DeviceMajor:0 DeviceMinor:901 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1236 DeviceMajor:0 DeviceMinor:1236 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1743372f-bdb0-4558-b47b-3714f3aa3fde/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-814 DeviceMajor:0 DeviceMinor:814 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1253 DeviceMajor:0 DeviceMinor:1253 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/93c4687f8a629173f2b94639e83fbe20ff1ad3e33cea55c6d4a8747fb84f23bd/userdata/shm DeviceMajor:0 DeviceMinor:395 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~projected/kube-api-access-7kj7r DeviceMajor:0 DeviceMinor:545 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dc3354cb-b6c3-40a5-a695-cccb079ad292/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:843 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aab7eeb6e8bf766155c633f93a77e37a4ca269be0e48fc054214cf6cfcafebc6/userdata/shm DeviceMajor:0 DeviceMinor:381 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/676adb95-3ffd-43e5-89e3-9d7a7d74df28/volumes/kubernetes.io~projected/kube-api-access-lnh76 DeviceMajor:0 DeviceMinor:391 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-919 DeviceMajor:0 DeviceMinor:919 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:620 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-617 DeviceMajor:0 DeviceMinor:617 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/32d420d6-bbda-42c0-82fe-8b187ad91607/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1202 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/248a67424bdbb1372ecb2fb070f15261787aceee6d09513f5274ef915ebe68ae/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-406 DeviceMajor:0 
DeviceMinor:406 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1048 DeviceMajor:0 DeviceMinor:1048 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1232 DeviceMajor:0 DeviceMinor:1232 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a0540a70-a256-422b-a827-e564d0e67866/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c4ca903ca847491a2f54378905e0af98cc694e7eee50b6b0fc0352bdc61947b5/userdata/shm DeviceMajor:0 DeviceMinor:1174 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-770 DeviceMajor:0 DeviceMinor:770 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-658 DeviceMajor:0 DeviceMinor:658 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4476a4bc67f1c6ee7cd4f19dd630e65931f829a02ef5d857f284d2df2a08dd8d/userdata/shm DeviceMajor:0 DeviceMinor:376 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-794 DeviceMajor:0 DeviceMinor:794 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-822 DeviceMajor:0 DeviceMinor:822 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-418 DeviceMajor:0 DeviceMinor:418 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c9cd32bc-a13a-44ee-ba52-7bb335c7007b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:238 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b2a83ddd-ffa5-4127-9099-91187ad9dbba/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 
DeviceMinor:455 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-780 DeviceMajor:0 DeviceMinor:780 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6c0cfba2536520f6ae9edb17fbb1f2d62f0a336f61c097893c2d906e44086caa/userdata/shm DeviceMajor:0 DeviceMinor:373 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-813 DeviceMajor:0 DeviceMinor:813 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-404 DeviceMajor:0 DeviceMinor:404 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c4a146b2-c712-408a-97d8-5de3a84f3aaf/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:890 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1235 DeviceMajor:0 DeviceMinor:1235 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-561 DeviceMajor:0 DeviceMinor:561 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/97b8261a-91e3-435e-93f8-0a17f30359fd/volumes/kubernetes.io~projected/kube-api-access-pmqrb DeviceMajor:0 DeviceMinor:656 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1137 DeviceMajor:0 DeviceMinor:1137 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-511 DeviceMajor:0 DeviceMinor:511 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-489 DeviceMajor:0 DeviceMinor:489 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1334 DeviceMajor:0 DeviceMinor:1334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6dcef814-353e-4985-9afc-9e545f7853ae/volumes/kubernetes.io~projected/kube-api-access-pjsbs DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/857357a1-dc98-4dd5-98b3-c94b1ddf9dec/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:490 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-798 DeviceMajor:0 DeviceMinor:798 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f7317f91-9441-449f-9738-85da088cf94f/volumes/kubernetes.io~projected/kube-api-access-58cq8 DeviceMajor:0 DeviceMinor:137 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-680 DeviceMajor:0 DeviceMinor:680 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1142 DeviceMajor:0 DeviceMinor:1142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b908a349a6f9bb67998eaa77c0cb0b67337fd06d9753261cfa10d0744b50e07/userdata/shm DeviceMajor:0 DeviceMinor:383 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22739961-e322-47f1-b232-eaa4cc35319c/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:342 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1145 DeviceMajor:0 DeviceMinor:1145 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/460579bff076abf6e4d419f44b68400fd50e9b2bc9a03fc49494f7b68ef04045/userdata/shm DeviceMajor:0 DeviceMinor:95 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-122 DeviceMajor:0 DeviceMinor:122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-884 DeviceMajor:0 DeviceMinor:884 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-776 DeviceMajor:0 DeviceMinor:776 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-948 DeviceMajor:0 DeviceMinor:948 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-59 DeviceMajor:0 DeviceMinor:59 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e379cfaf-3a4c-40e7-8641-3524b3669295/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aafd16466f6eed6a672c6fa59488bcd1ea8cbc42fa0dfe86540d9e97cd364cb6/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/00ef3b03-55dc-4661-b7fd-1e586c45b5de/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:537 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a8d00a01-aa48-4830-a558-93a31cb98b31/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:541 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-904 DeviceMajor:0 DeviceMinor:904 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7498fd943d93408ca13b7d162b110c819eb4973f8cff45407cca83058e5ae25e/userdata/shm DeviceMajor:0 DeviceMinor:889 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0458436e1991707 MacAddress:fe:8c:56:be:82:ed Speed:10000 Mtu:8900} {Name:0926ad4d371086f MacAddress:e2:58:ad:fb:01:fb Speed:10000 Mtu:8900} {Name:19f7fe92f509ebf MacAddress:7a:63:6a:b6:5b:46 Speed:10000 Mtu:8900} {Name:1a32ad8e692aa77 MacAddress:f6:e7:38:3a:f3:7d Speed:10000 Mtu:8900} {Name:2287a210e87155c MacAddress:96:97:5e:e0:30:a7 Speed:10000 Mtu:8900} {Name:248a67424bdbb13 MacAddress:22:24:86:9d:14:ab Speed:10000 Mtu:8900} {Name:277a843980e79e0 MacAddress:6e:b2:87:d9:f2:3d Speed:10000 Mtu:8900} {Name:28301e075bce0b5 MacAddress:42:34:25:3f:24:38 Speed:10000 Mtu:8900} {Name:283ae7ddfaf1351 MacAddress:46:2f:ff:a2:33:e5 Speed:10000 Mtu:8900} {Name:3aa52e31a2b9a47 MacAddress:fe:87:a4:68:f4:40 Speed:10000 Mtu:8900} {Name:4134014c45e6845 MacAddress:d6:96:af:72:70:93 Speed:10000 Mtu:8900} {Name:4476a4bc67f1c6e MacAddress:26:e9:84:c3:0b:b5 Speed:10000 Mtu:8900} {Name:47db8b5a98f2dc9 MacAddress:62:d8:a8:e2:46:3f Speed:10000 Mtu:8900} {Name:48c24060310ea59 MacAddress:fe:6c:18:48:49:e2 Speed:10000 Mtu:8900} {Name:49dbd775e2346c3 MacAddress:b2:9a:ba:1e:fb:47 Speed:10000 Mtu:8900} {Name:49e455816343db1 MacAddress:1e:0b:af:71:c1:70 Speed:10000 Mtu:8900} {Name:4b7e8c0ad2cdf87 MacAddress:72:14:f5:73:3d:2f Speed:10000 Mtu:8900} {Name:4b908a349a6f9bb MacAddress:5e:07:0e:71:68:24 Speed:10000 Mtu:8900} {Name:4bdfcd16962800c MacAddress:fe:74:34:16:d0:93 Speed:10000 Mtu:8900} {Name:4e9f20e89ac525e MacAddress:de:48:56:52:c8:f1 Speed:10000 
Mtu:8900} {Name:4f55a0409391e00 MacAddress:fe:97:5f:68:18:d9 Speed:10000 Mtu:8900} {Name:4fa491ec351633e MacAddress:6e:5a:40:88:a6:05 Speed:10000 Mtu:8900} {Name:5be46040ce7c3ed MacAddress:82:69:9b:05:64:48 Speed:10000 Mtu:8900} {Name:62a59e4f54ec7d1 MacAddress:7e:70:33:d4:de:73 Speed:10000 Mtu:8900} {Name:667ddc72e934223 MacAddress:3e:35:9d:d8:a0:14 Speed:10000 Mtu:8900} {Name:6c0cfba2536520f MacAddress:42:b0:6e:b5:d6:46 Speed:10000 Mtu:8900} {Name:72eeb0715f00e0e MacAddress:82:ee:1a:c5:39:bf Speed:10000 Mtu:8900} {Name:7ed4da3dff52ca3 MacAddress:56:ff:1f:4e:04:9c Speed:10000 Mtu:8900} {Name:7fa429b0e25c1a3 MacAddress:d6:19:db:4c:40:e3 Speed:10000 Mtu:8900} {Name:86f122d4749ad14 MacAddress:3e:35:14:f6:4d:51 Speed:10000 Mtu:8900} {Name:8dd32bd58a893bd MacAddress:f2:32:a3:0c:dc:de Speed:10000 Mtu:8900} {Name:8ec6f01b2f5ea3a MacAddress:0e:cf:de:96:95:98 Speed:10000 Mtu:8900} {Name:92905ae35545e07 MacAddress:3a:ee:d8:69:d9:61 Speed:10000 Mtu:8900} {Name:93b137c9da7cc55 MacAddress:42:45:a0:19:8d:20 Speed:10000 Mtu:8900} {Name:93c4687f8a62917 MacAddress:ce:0a:a3:02:61:88 Speed:10000 Mtu:8900} {Name:93fe8320cd8b094 MacAddress:be:88:41:75:9a:09 Speed:10000 Mtu:8900} {Name:99cec3957b7591d MacAddress:76:92:3b:53:dc:fe Speed:10000 Mtu:8900} {Name:a1854497ff5ad67 MacAddress:0a:b0:fc:25:3b:29 Speed:10000 Mtu:8900} {Name:a926069b18af0a4 MacAddress:f2:d0:d7:da:cc:e3 Speed:10000 Mtu:8900} {Name:aab7eeb6e8bf766 MacAddress:e6:c6:74:e9:ac:51 Speed:10000 Mtu:8900} {Name:aafd16466f6eed6 MacAddress:2e:e7:f0:f5:28:92 Speed:10000 Mtu:8900} {Name:ba4f7bcf968605d MacAddress:5e:a2:b6:a1:d2:7c Speed:10000 Mtu:8900} {Name:bd5a6caddc3fbff MacAddress:d2:e9:a8:5c:cc:44 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:46:04:3f:41:5e:67 Speed:0 Mtu:8900} {Name:c7eb8cc3989ea5b MacAddress:6a:42:62:c4:ce:7b Speed:10000 Mtu:8900} {Name:d247b2be5bf1cb7 MacAddress:3e:1f:f7:eb:e0:bc Speed:10000 Mtu:8900} {Name:d3a6bee8bdf6774 
MacAddress:12:3a:a6:af:75:30 Speed:10000 Mtu:8900} {Name:d84f8ee868ca64b MacAddress:ba:5a:42:d1:19:0d Speed:10000 Mtu:8900} {Name:debb4f03d3db874 MacAddress:36:8f:e5:8d:4a:cd Speed:10000 Mtu:8900} {Name:e04624272a6ae06 MacAddress:9e:b2:e9:df:c4:26 Speed:10000 Mtu:8900} {Name:e55ecc9e109900a MacAddress:7a:5f:e7:56:bd:53 Speed:10000 Mtu:8900} {Name:ec4a831847dbd9a MacAddress:a6:94:f0:d6:5c:ea Speed:10000 Mtu:8900} {Name:ef2bb9465307e33 MacAddress:0e:fb:54:bc:93:ec Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:61:d7:58 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:30:df:9f Speed:-1 Mtu:9000} {Name:f1504c9a4b0e4bf MacAddress:da:ed:c8:88:f4:71 Speed:10000 Mtu:8900} {Name:f218aafff65afcf MacAddress:8e:9b:56:bb:1b:44 Speed:10000 Mtu:8900} {Name:f2d5026b3d62b6e MacAddress:6a:40:78:55:14:7b Speed:10000 Mtu:8900} {Name:f9e85a0740edade MacAddress:ea:af:30:8f:bd:ca Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:6e:22:cb:a5:72:48 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 
Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 
Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.850738 31559 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.850817 31559 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851083 31559 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851306 31559 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851341 31559 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"P
ercentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851668 31559 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851681 31559 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851691 31559 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851719 31559 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851763 31559 state_mem.go:36] "Initialized new in-memory state store" Feb 16 02:22:27.851828 master-0 kubenswrapper[31559]: I0216 02:22:27.851860 31559 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.851933 31559 kubelet.go:418] "Attempting to sync node with API server" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.851948 31559 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.851964 31559 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.851978 31559 kubelet.go:324] "Adding apiserver pod source" Feb 
16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.851997 31559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.853757 31559 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.854141 31559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.854740 31559 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.854983 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855248 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855271 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855283 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855294 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855305 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855315 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855324 31559 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api"
Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855337 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855354 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855368 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855386 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.855421 31559 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.856012 31559 server.go:1280] "Started kubelet"
Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.856223 31559 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 02:22:27.858815 master-0 kubenswrapper[31559]: I0216 02:22:27.858574 31559 server.go:449] "Adding debug handlers to kubelet server"
Feb 16 02:22:27.857107 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 16 02:22:27.864653 master-0 kubenswrapper[31559]: I0216 02:22:27.859914 31559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 02:22:27.864653 master-0 kubenswrapper[31559]: I0216 02:22:27.859992 31559 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 16 02:22:27.865531 master-0 kubenswrapper[31559]: I0216 02:22:27.865031 31559 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 02:22:27.876754 master-0 kubenswrapper[31559]: E0216 02:22:27.876695 31559 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 16 02:22:27.882465 master-0 kubenswrapper[31559]: I0216 02:22:27.882374 31559 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 02:22:27.882465 master-0 kubenswrapper[31559]: I0216 02:22:27.882420 31559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 02:22:27.882946 master-0 kubenswrapper[31559]: I0216 02:22:27.882823 31559 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 02:22:27.882946 master-0 kubenswrapper[31559]: I0216 02:22:27.882849 31559 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 02:22:27.882946 master-0 kubenswrapper[31559]: I0216 02:22:27.882886 31559 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 16 02:22:27.884018 master-0 kubenswrapper[31559]: E0216 02:22:27.882746 31559 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 02:22:27.885244 master-0 kubenswrapper[31559]: I0216 02:22:27.885199 31559 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 02:22:27.885745 master-0 kubenswrapper[31559]: I0216 02:22:27.885653 31559 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 01:56:50 +0000 UTC, rotation deadline is 2026-02-16 20:37:42.927522294 +0000 UTC
Feb 16 02:22:27.885745 master-0 kubenswrapper[31559]: I0216 02:22:27.885713 31559 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h15m15.041813474s for next certificate rotation
Feb 16 02:22:27.885994 master-0 kubenswrapper[31559]: I0216 02:22:27.885961 31559 factory.go:55] Registering systemd factory
Feb 16 02:22:27.885994 master-0 kubenswrapper[31559]: I0216 02:22:27.885994 31559 factory.go:221] Registration of the systemd container factory successfully
Feb 16 02:22:27.886359 master-0 kubenswrapper[31559]: I0216 02:22:27.886309 31559 factory.go:153] Registering CRI-O factory
Feb 16 02:22:27.886359 master-0 kubenswrapper[31559]: I0216 02:22:27.886328 31559 factory.go:221] Registration of the crio container factory successfully
Feb 16 02:22:27.886607 master-0 kubenswrapper[31559]: I0216 02:22:27.886423 31559 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 02:22:27.886607 master-0 kubenswrapper[31559]: I0216 02:22:27.886480 31559 factory.go:103] Registering Raw factory
Feb 16 02:22:27.886607 master-0 kubenswrapper[31559]: I0216 02:22:27.886502 31559 manager.go:1196] Started watching for new ooms in manager
Feb 16 02:22:27.887269 master-0 kubenswrapper[31559]: I0216 02:22:27.887223 31559 manager.go:319] Starting recovery of all containers
Feb 16 02:22:27.912365 master-0 kubenswrapper[31559]: I0216 02:22:27.912183 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="857357a1-dc98-4dd5-98b3-c94b1ddf9dec" volumeName="kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-ca-certs" seLinuxMountContext=""
Feb 16 02:22:27.912365 master-0 kubenswrapper[31559]: I0216 02:22:27.912341 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" volumeName="kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb" seLinuxMountContext=""
Feb 16 02:22:27.912365 master-0 kubenswrapper[31559]: I0216 02:22:27.912366 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32d420d6-bbda-42c0-82fe-8b187ad91607" volumeName="kubernetes.io/projected/32d420d6-bbda-42c0-82fe-8b187ad91607-kube-api-access-w4mp4" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912387 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912421 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86af980a-2653-40c3-a368-a795d7fb8558" volumeName="kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912484 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9be9fd24-fdb1-43dc-80b8-68020427bfd7" volumeName="kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912516 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0540a70-a256-422b-a827-e564d0e67866" volumeName="kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912536 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0abea413-e08a-465a-8ec4-2be650bfd5bd" volumeName="kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912571 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0abea413-e08a-465a-8ec4-2be650bfd5bd" volumeName="kubernetes.io/empty-dir/0abea413-e08a-465a-8ec4-2be650bfd5bd-snapshots" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912600 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22739961-e322-47f1-b232-eaa4cc35319c" volumeName="kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-serving-cert" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912621 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8086f93-2d98-4218-afac-20a65e6bf943" volumeName="kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912693 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a900f93-91c9-4782-89a3-1cc09f3aec95" volumeName="kubernetes.io/projected/0a900f93-91c9-4782-89a3-1cc09f3aec95-kube-api-access-sctj8" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912717 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d870332c-2498-4135-a9b3-a71e67c2805b" volumeName="kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images" seLinuxMountContext=""
Feb 16 02:22:27.912751 master-0 kubenswrapper[31559]: I0216 02:22:27.912752 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c267cc7-a51a-4b14-baee-e584254eefc5" volumeName="kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.912773 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.912824 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bde83629-b39c-401e-bc30-5ce205638918" volumeName="kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.912855 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30fef0d5-46ea-4fa3-9ffa-88187d010ffe" volumeName="kubernetes.io/projected/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-kube-api-access-xj8x2" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.912907 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.912933 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c267cc7-a51a-4b14-baee-e584254eefc5" volumeName="kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.912953 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-client" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.912997 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22739961-e322-47f1-b232-eaa4cc35319c" volumeName="kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-etcd-serving-ca" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913019 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ffa4db8-97da-42de-8e51-35680f518ca7" volumeName="kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913038 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="857357a1-dc98-4dd5-98b3-c94b1ddf9dec" volumeName="kubernetes.io/empty-dir/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-cache" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913065 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9be9fd24-fdb1-43dc-80b8-68020427bfd7" volumeName="kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913086 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b62004d-7fe3-47ae-8e26-8496befb047c" volumeName="kubernetes.io/projected/5b62004d-7fe3-47ae-8e26-8496befb047c-kube-api-access-ln8g4" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913124 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83883885-f493-4559-9c0f-e28d69712475" volumeName="kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913152 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97b8261a-91e3-435e-93f8-0a17f30359fd" volumeName="kubernetes.io/projected/97b8261a-91e3-435e-93f8-0a17f30359fd-kube-api-access-pmqrb" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913200 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="467d92a2-1cf3-418d-b41e-8e5f9d7a5b74" volumeName="kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913248 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f0f9b7d-e663-4927-861b-a9544d483b6e" volumeName="kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913287 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a900f93-91c9-4782-89a3-1cc09f3aec95" volumeName="kubernetes.io/empty-dir/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-textfile" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913320 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27d876a7-6a48-4942-ad96-ed8ed3aa104b" volumeName="kubernetes.io/empty-dir/27d876a7-6a48-4942-ad96-ed8ed3aa104b-cache" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913345 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ffa4db8-97da-42de-8e51-35680f518ca7" volumeName="kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx" seLinuxMountContext=""
Feb 16 02:22:27.913358 master-0 kubenswrapper[31559]: I0216 02:22:27.913363 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc3354cb-b6c3-40a5-a695-cccb079ad292" volumeName="kubernetes.io/empty-dir/dc3354cb-b6c3-40a5-a695-cccb079ad292-tmpfs" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913398 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00ef3b03-55dc-4661-b7fd-1e586c45b5de" volumeName="kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-tuned" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913422 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22739961-e322-47f1-b232-eaa4cc35319c" volumeName="kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-audit-policies" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913479 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913506 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="467d92a2-1cf3-418d-b41e-8e5f9d7a5b74" volumeName="kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913526 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="857357a1-dc98-4dd5-98b3-c94b1ddf9dec" volumeName="kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-kube-api-access-lj6v2" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913545 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7317f91-9441-449f-9738-85da088cf94f" volumeName="kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913571 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7846b339-c46d-4983-b586-a28f2868f665" volumeName="kubernetes.io/projected/7846b339-c46d-4983-b586-a28f2868f665-kube-api-access-5cnfs" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913590 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7ac81030-35d1-4d86-844d-65d1156d8944" volumeName="kubernetes.io/projected/7ac81030-35d1-4d86-844d-65d1156d8944-kube-api-access-lqmhs" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913627 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c267cc7-a51a-4b14-baee-e584254eefc5" volumeName="kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913665 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27d876a7-6a48-4942-ad96-ed8ed3aa104b" volumeName="kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-kube-api-access-kf7tw" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913684 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b62004d-7fe3-47ae-8e26-8496befb047c" volumeName="kubernetes.io/secret/5b62004d-7fe3-47ae-8e26-8496befb047c-samples-operator-tls" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913708 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22739961-e322-47f1-b232-eaa4cc35319c" volumeName="kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913728 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a5b01c1-1231-4e69-8b6c-c4981b65b26e" volumeName="kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913752 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91938be6-9ae4-4849-abe8-fc842daecd23" volumeName="kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913771 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913790 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86af980a-2653-40c3-a368-a795d7fb8558" volumeName="kubernetes.io/projected/86af980a-2653-40c3-a368-a795d7fb8558-kube-api-access-tgqtf" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913832 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c267cc7-a51a-4b14-baee-e584254eefc5" volumeName="kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913894 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-encryption-config" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913930 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.913988 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8bbd369-4219-48ef-ae2d-b45c81789403" volumeName="kubernetes.io/projected/d8bbd369-4219-48ef-ae2d-b45c81789403-kube-api-access-grvmr" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914060 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc3354cb-b6c3-40a5-a695-cccb079ad292" volumeName="kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914100 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17390d9a-148d-4927-a831-5bc4873c43d5" volumeName="kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-metrics-certs" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914132 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27a42eb0-677c-414d-b0ec-f945ec39b7e9" volumeName="kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914245 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4a146b2-c712-408a-97d8-5de3a84f3aaf" volumeName="kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914275 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83883885-f493-4559-9c0f-e28d69712475" volumeName="kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914301 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d008dbd4-e713-4f2e-b64d-ca9cfc83a502" volumeName="kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914322 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430c146b-ceaf-411a-add6-ce949243aabf" volumeName="kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914343 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914368 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9be9fd24-fdb1-43dc-80b8-68020427bfd7" volumeName="kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914430 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00ef3b03-55dc-4661-b7fd-1e586c45b5de" volumeName="kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-tmp" seLinuxMountContext=""
Feb 16 02:22:27.914427 master-0 kubenswrapper[31559]: I0216 02:22:27.914497 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22739961-e322-47f1-b232-eaa4cc35319c" volumeName="kubernetes.io/projected/22739961-e322-47f1-b232-eaa4cc35319c-kube-api-access-9n58j" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914554 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32d420d6-bbda-42c0-82fe-8b187ad91607" volumeName="kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914637 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0540a70-a256-422b-a827-e564d0e67866" volumeName="kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914663 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8f33151-61df-4b66-ba85-9ba210779059" volumeName="kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914683 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0abea413-e08a-465a-8ec4-2be650bfd5bd" volumeName="kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914708 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a07cd28-a33d-4abd-9198-ba82bacd51ba" volumeName="kubernetes.io/projected/1a07cd28-a33d-4abd-9198-ba82bacd51ba-kube-api-access-5j9vb" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914747 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-serving-ca" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914784 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914811 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914830 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7317f91-9441-449f-9738-85da088cf94f" volumeName="kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914871 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7317f91-9441-449f-9738-85da088cf94f" volumeName="kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914892 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21686a6d-f685-4fb6-98af-3e8a39c5981b" volumeName="kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914912 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37fd7550-cc81-4180-8540-0bc5f62f63d2" volumeName="kubernetes.io/secret/37fd7550-cc81-4180-8540-0bc5f62f63d2-tls-certificates" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914938 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f810ea0-e32d-4097-beca-5194349a57a6" volumeName="kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-catalog-content" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914957 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c267cc7-a51a-4b14-baee-e584254eefc5" volumeName="kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914980 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bde83629-b39c-401e-bc30-5ce205638918" volumeName="kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.914999 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22739961-e322-47f1-b232-eaa4cc35319c" volumeName="kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-etcd-client" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915017 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b923d74-bad3-4780-8e7e-e8365ac9ea06" volumeName="kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-catalog-content" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915106 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695d1f01-d3c1-4fb9-9dda-daf33eae11f5" volumeName="kubernetes.io/projected/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-kube-api-access-kws4h" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915126 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e491b5ed-9c09-4308-9843-fba8d43bd3ae" volumeName="kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915151 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" volumeName="kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915245 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a77e2f8f-d164-4a58-aab2-f3444c05cacb" volumeName="kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915269 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4a146b2-c712-408a-97d8-5de3a84f3aaf" volumeName="kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915312 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30fef0d5-46ea-4fa3-9ffa-88187d010ffe" volumeName="kubernetes.io/configmap/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cco-trusted-ca" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915333 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c442d349-668b-4d01-a097-5981b7a04eac" volumeName="kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915352 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e379cfaf-3a4c-40e7-8641-3524b3669295" volumeName="kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915385 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-config" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915411 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" volumeName="kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915493 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" volumeName="kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915525 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f2d2601-481d-4e86-ac4c-3d34d5691261" volumeName="kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915555 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48863ff6-63ac-42d7-bac7-29d888c92db9" volumeName="kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915598 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c267cc7-a51a-4b14-baee-e584254eefc5" volumeName="kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle" seLinuxMountContext=""
Feb 16 02:22:27.916614 master-0 kubenswrapper[31559]: I0216 02:22:27.915618 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c02961f-30ec-4405-b7fa-9c4192342ae9" volumeName="kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert" seLinuxMountContext=""
Feb 16 02:22:27.919062 master-0 kubenswrapper[31559]: I0216 02:22:27.918208 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec84b8a-a0d1-4b07-8827-cef0beb89ecd" volumeName="kubernetes.io/projected/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-kube-api-access-88kmw" seLinuxMountContext=""
Feb 16 02:22:27.919184 master-0 kubenswrapper[31559]: I0216 02:22:27.919082 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a5b01c1-1231-4e69-8b6c-c4981b65b26e" volumeName="kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert" seLinuxMountContext=""
Feb 16 02:22:27.919184 master-0 kubenswrapper[31559]: I0216 02:22:27.919115 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/projected/8f918d5b-1a4c-4b56-98a4-5cef638bb615-kube-api-access-fxnht" seLinuxMountContext=""
Feb 16 02:22:27.919184 master-0 kubenswrapper[31559]: I0216 02:22:27.919139 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d870332c-2498-4135-a9b3-a71e67c2805b" volumeName="kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls" seLinuxMountContext=""
Feb 16 02:22:27.919184 master-0 kubenswrapper[31559]: I0216 02:22:27.919162 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27a42eb0-677c-414d-b0ec-f945ec39b7e9" volumeName="kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls" seLinuxMountContext=""
Feb 16 02:22:27.919184 master-0 kubenswrapper[31559]: I0216 02:22:27.919181 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1487f82c-c14a-4f65-be77-5af2612f56f4" volumeName="kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-catalog-content" seLinuxMountContext=""
Feb 16 02:22:27.919567 master-0 kubenswrapper[31559]: I0216 02:22:27.919231 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1487f82c-c14a-4f65-be77-5af2612f56f4" volumeName="kubernetes.io/projected/1487f82c-c14a-4f65-be77-5af2612f56f4-kube-api-access-wxq28" seLinuxMountContext=""
Feb 16 02:22:27.919567 master-0 kubenswrapper[31559]: I0216 02:22:27.919253 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f2d2601-481d-4e86-ac4c-3d34d5691261" volumeName="kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 16 02:22:27.919567 master-0 kubenswrapper[31559]:
I0216 02:22:27.919405 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f2d2601-481d-4e86-ac4c-3d34d5691261" volumeName="kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets" seLinuxMountContext="" Feb 16 02:22:27.919567 master-0 kubenswrapper[31559]: I0216 02:22:27.919461 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23755f7f-dce6-4dcf-9664-22e3aedb5c81" volumeName="kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn" seLinuxMountContext="" Feb 16 02:22:27.919567 master-0 kubenswrapper[31559]: I0216 02:22:27.919487 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e491b5ed-9c09-4308-9843-fba8d43bd3ae" volumeName="kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca" seLinuxMountContext="" Feb 16 02:22:27.919567 master-0 kubenswrapper[31559]: I0216 02:22:27.919510 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22739961-e322-47f1-b232-eaa4cc35319c" volumeName="kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-encryption-config" seLinuxMountContext="" Feb 16 02:22:27.919567 master-0 kubenswrapper[31559]: I0216 02:22:27.919532 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc3354cb-b6c3-40a5-a695-cccb079ad292" volumeName="kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert" seLinuxMountContext="" Feb 16 02:22:27.919567 master-0 kubenswrapper[31559]: I0216 02:22:27.919555 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8bbd369-4219-48ef-ae2d-b45c81789403" volumeName="kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919577 
31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e491b5ed-9c09-4308-9843-fba8d43bd3ae" volumeName="kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919601 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b923d74-bad3-4780-8e7e-e8365ac9ea06" volumeName="kubernetes.io/projected/5b923d74-bad3-4780-8e7e-e8365ac9ea06-kube-api-access-cstlg" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919624 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7846b339-c46d-4983-b586-a28f2868f665" volumeName="kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919645 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c442d349-668b-4d01-a097-5981b7a04eac" volumeName="kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919669 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919690 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97b8261a-91e3-435e-93f8-0a17f30359fd" volumeName="kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919710 31559 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" volumeName="kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919729 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c442d349-668b-4d01-a097-5981b7a04eac" volumeName="kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919771 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" volumeName="kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919798 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0fbc8f91-f8cc-48d8-917c-64fa978069de" volumeName="kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919819 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a67f799-fd8d-4bee-9d67-720151c1650b" volumeName="kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919859 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f810ea0-e32d-4097-beca-5194349a57a6" volumeName="kubernetes.io/projected/5f810ea0-e32d-4097-beca-5194349a57a6-kube-api-access-p49hf" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919889 31559 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919912 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0fbc8f91-f8cc-48d8-917c-64fa978069de" volumeName="kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919931 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17390d9a-148d-4927-a831-5bc4873c43d5" volumeName="kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-default-certificate" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919950 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="456e6c3a-c16c-470b-a0cd-bb79865b54f0" volumeName="kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.919998 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04804a08-e3a5-46f3-abcb-967866834baa" volumeName="kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.920017 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21686a6d-f685-4fb6-98af-3e8a39c5981b" volumeName="kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.920039 31559 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0540a70-a256-422b-a827-e564d0e67866" volumeName="kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.920063 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e478bdcc-052e-42f8-91b6-58c26cfc9cfc" volumeName="kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.920091 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27a42eb0-677c-414d-b0ec-f945ec39b7e9" volumeName="kubernetes.io/projected/27a42eb0-677c-414d-b0ec-f945ec39b7e9-kube-api-access-l4djm" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.920117 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89041b37-18f6-499d-89ec-a0523a25dc58" volumeName="kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-utilities" seLinuxMountContext="" Feb 16 02:22:27.920134 master-0 kubenswrapper[31559]: I0216 02:22:27.920144 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad700b17-ba2a-41d4-8bec-538a009a613b" volumeName="kubernetes.io/secret/ad700b17-ba2a-41d4-8bec-538a009a613b-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920164 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-image-import-ca" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920204 
31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9defdfff-eb18-4beb-9591-918d0e4b4236" volumeName="kubernetes.io/configmap/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-cabundle" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920223 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d870332c-2498-4135-a9b3-a71e67c2805b" volumeName="kubernetes.io/projected/d870332c-2498-4135-a9b3-a71e67c2805b-kube-api-access-wmjjn" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920242 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e379cfaf-3a4c-40e7-8641-3524b3669295" volumeName="kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920261 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a900f93-91c9-4782-89a3-1cc09f3aec95" volumeName="kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920282 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48863ff6-63ac-42d7-bac7-29d888c92db9" volumeName="kubernetes.io/projected/48863ff6-63ac-42d7-bac7-29d888c92db9-kube-api-access-kgj82" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920306 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7ac81030-35d1-4d86-844d-65d1156d8944" volumeName="kubernetes.io/secret/7ac81030-35d1-4d86-844d-65d1156d8944-metrics-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 
02:22:27.920333 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04804a08-e3a5-46f3-abcb-967866834baa" volumeName="kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920358 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920411 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32d420d6-bbda-42c0-82fe-8b187ad91607" volumeName="kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920472 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c267cc7-a51a-4b14-baee-e584254eefc5" volumeName="kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920504 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8f33151-61df-4b66-ba85-9ba210779059" volumeName="kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920570 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a900f93-91c9-4782-89a3-1cc09f3aec95" volumeName="kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca" seLinuxMountContext="" Feb 16 02:22:27.922971 
master-0 kubenswrapper[31559]: I0216 02:22:27.920600 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0abea413-e08a-465a-8ec4-2be650bfd5bd" volumeName="kubernetes.io/projected/0abea413-e08a-465a-8ec4-2be650bfd5bd-kube-api-access-bxlnm" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920626 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86af980a-2653-40c3-a368-a795d7fb8558" volumeName="kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920651 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" volumeName="kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920677 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c02961f-30ec-4405-b7fa-9c4192342ae9" volumeName="kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920802 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="857357a1-dc98-4dd5-98b3-c94b1ddf9dec" volumeName="kubernetes.io/secret/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-catalogserver-certs" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920841 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9defdfff-eb18-4beb-9591-918d0e4b4236" volumeName="kubernetes.io/secret/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-key" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 
kubenswrapper[31559]: I0216 02:22:27.920870 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" volumeName="kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920902 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c442d349-668b-4d01-a097-5981b7a04eac" volumeName="kubernetes.io/projected/c442d349-668b-4d01-a097-5981b7a04eac-kube-api-access-4vchs" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920932 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" volumeName="kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920960 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30fef0d5-46ea-4fa3-9ffa-88187d010ffe" volumeName="kubernetes.io/secret/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.920986 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91938be6-9ae4-4849-abe8-fc842daecd23" volumeName="kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921012 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e491b5ed-9c09-4308-9843-fba8d43bd3ae" volumeName="kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles" seLinuxMountContext="" Feb 16 
02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921079 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430c146b-ceaf-411a-add6-ce949243aabf" volumeName="kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921110 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="456e6c3a-c16c-470b-a0cd-bb79865b54f0" volumeName="kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921153 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c02961f-30ec-4405-b7fa-9c4192342ae9" volumeName="kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921179 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc3354cb-b6c3-40a5-a695-cccb079ad292" volumeName="kubernetes.io/projected/dc3354cb-b6c3-40a5-a695-cccb079ad292-kube-api-access-hm44l" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921204 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1743372f-bdb0-4558-b47b-3714f3aa3fde" volumeName="kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921231 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dcef814-353e-4985-9afc-9e545f7853ae" volumeName="kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides" seLinuxMountContext="" Feb 16 
02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921254 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f918d5b-1a4c-4b56-98a4-5cef638bb615" volumeName="kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921277 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f0f9b7d-e663-4927-861b-a9544d483b6e" volumeName="kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921327 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2a83ddd-ffa5-4127-9099-91187ad9dbba" volumeName="kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921352 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bde83629-b39c-401e-bc30-5ce205638918" volumeName="kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921376 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32d420d6-bbda-42c0-82fe-8b187ad91607" volumeName="kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921398 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="430c146b-ceaf-411a-add6-ce949243aabf" volumeName="kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq" seLinuxMountContext="" Feb 16 
02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921421 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="467d92a2-1cf3-418d-b41e-8e5f9d7a5b74" volumeName="kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921484 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695d1f01-d3c1-4fb9-9dda-daf33eae11f5" volumeName="kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921511 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17390d9a-148d-4927-a831-5bc4873c43d5" volumeName="kubernetes.io/configmap/17390d9a-148d-4927-a831-5bc4873c43d5-service-ca-bundle" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921536 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" volumeName="kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921591 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec84b8a-a0d1-4b07-8827-cef0beb89ecd" volumeName="kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921615 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a67f799-fd8d-4bee-9d67-720151c1650b" volumeName="kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script" 
seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921669 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad700b17-ba2a-41d4-8bec-538a009a613b" volumeName="kubernetes.io/projected/ad700b17-ba2a-41d4-8bec-538a009a613b-kube-api-access" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921701 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec84b8a-a0d1-4b07-8827-cef0beb89ecd" volumeName="kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921692 31559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921727 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8d00a01-aa48-4830-a558-93a31cb98b31" volumeName="kubernetes.io/secret/a8d00a01-aa48-4830-a558-93a31cb98b31-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921884 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04804a08-e3a5-46f3-abcb-967866834baa" volumeName="kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921907 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17390d9a-148d-4927-a831-5bc4873c43d5" volumeName="kubernetes.io/projected/17390d9a-148d-4927-a831-5bc4873c43d5-kube-api-access-85sdg" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921918 31559 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="21686a6d-f685-4fb6-98af-3e8a39c5981b" volumeName="kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921928 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a77e2f8f-d164-4a58-aab2-f3444c05cacb" volumeName="kubernetes.io/projected/a77e2f8f-d164-4a58-aab2-f3444c05cacb-kube-api-access-bsxrl" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921939 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921948 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="724ac845-3835-458b-9645-e665be135ff9" volumeName="kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921957 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89041b37-18f6-499d-89ec-a0523a25dc58" volumeName="kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-catalog-content" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921966 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695d1f01-d3c1-4fb9-9dda-daf33eae11f5" volumeName="kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921976 31559 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad700b17-ba2a-41d4-8bec-538a009a613b" volumeName="kubernetes.io/configmap/ad700b17-ba2a-41d4-8bec-538a009a613b-service-ca" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.921985 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0fbc8f91-f8cc-48d8-917c-64fa978069de" volumeName="kubernetes.io/projected/0fbc8f91-f8cc-48d8-917c-64fa978069de-kube-api-access-5bnwz" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922001 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4a146b2-c712-408a-97d8-5de3a84f3aaf" volumeName="kubernetes.io/projected/c4a146b2-c712-408a-97d8-5de3a84f3aaf-kube-api-access-6p8rc" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922010 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676adb95-3ffd-43e5-89e3-9d7a7d74df28" volumeName="kubernetes.io/projected/676adb95-3ffd-43e5-89e3-9d7a7d74df28-kube-api-access-lnh76" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922022 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83883885-f493-4559-9c0f-e28d69712475" volumeName="kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922031 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86af980a-2653-40c3-a368-a795d7fb8558" volumeName="kubernetes.io/empty-dir/86af980a-2653-40c3-a368-a795d7fb8558-volume-directive-shadow" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 
02:22:27.922041 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" volumeName="kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922049 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0abea413-e08a-465a-8ec4-2be650bfd5bd" volumeName="kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922058 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1487f82c-c14a-4f65-be77-5af2612f56f4" volumeName="kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-utilities" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922067 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1743372f-bdb0-4558-b47b-3714f3aa3fde" volumeName="kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922077 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27a42eb0-677c-414d-b0ec-f945ec39b7e9" volumeName="kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922086 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86af980a-2653-40c3-a368-a795d7fb8558" volumeName="kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 
kubenswrapper[31559]: I0216 02:22:27.922095 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d870332c-2498-4135-a9b3-a71e67c2805b" volumeName="kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922135 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76915cba-7c11-4bd8-9943-81de74e7781b" volumeName="kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922145 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91938be6-9ae4-4849-abe8-fc842daecd23" volumeName="kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922156 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2a83ddd-ffa5-4127-9099-91187ad9dbba" volumeName="kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922167 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8bbd369-4219-48ef-ae2d-b45c81789403" volumeName="kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922176 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e491b5ed-9c09-4308-9843-fba8d43bd3ae" volumeName="kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p" seLinuxMountContext="" Feb 
16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922186 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27a42eb0-677c-414d-b0ec-f945ec39b7e9" volumeName="kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922197 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695d1f01-d3c1-4fb9-9dda-daf33eae11f5" volumeName="kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922206 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9defdfff-eb18-4beb-9591-918d0e4b4236" volumeName="kubernetes.io/projected/9defdfff-eb18-4beb-9591-918d0e4b4236-kube-api-access-r94gg" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922238 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec84b8a-a0d1-4b07-8827-cef0beb89ecd" volumeName="kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922252 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2a83ddd-ffa5-4127-9099-91187ad9dbba" volumeName="kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922264 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04804a08-e3a5-46f3-abcb-967866834baa" volumeName="kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls" seLinuxMountContext="" 
Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922276 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a900f93-91c9-4782-89a3-1cc09f3aec95" volumeName="kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922290 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f810ea0-e32d-4097-beca-5194349a57a6" volumeName="kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-utilities" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922301 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8086f93-2d98-4218-afac-20a65e6bf943" volumeName="kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922312 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922323 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00ef3b03-55dc-4661-b7fd-1e586c45b5de" volumeName="kubernetes.io/projected/00ef3b03-55dc-4661-b7fd-1e586c45b5de-kube-api-access-7kj7r" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922334 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1743372f-bdb0-4558-b47b-3714f3aa3fde" volumeName="kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert" seLinuxMountContext="" Feb 16 
02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922345 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b923d74-bad3-4780-8e7e-e8365ac9ea06" volumeName="kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-utilities" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922356 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23755f7f-dce6-4dcf-9664-22e3aedb5c81" volumeName="kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922367 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a5b01c1-1231-4e69-8b6c-c4981b65b26e" volumeName="kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922376 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83883885-f493-4559-9c0f-e28d69712475" volumeName="kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922403 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76915cba-7c11-4bd8-9943-81de74e7781b" volumeName="kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922413 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8f33151-61df-4b66-ba85-9ba210779059" volumeName="kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert" seLinuxMountContext="" Feb 16 
02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922428 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27d876a7-6a48-4942-ad96-ed8ed3aa104b" volumeName="kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-ca-certs" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922451 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0540a70-a256-422b-a827-e564d0e67866" volumeName="kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922462 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c4a146b2-c712-408a-97d8-5de3a84f3aaf" volumeName="kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922472 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922481 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17390d9a-148d-4927-a831-5bc4873c43d5" volumeName="kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-stats-auth" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922492 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75915935-00a2-44ce-99d1-03e2492044d4" volumeName="kubernetes.io/projected/75915935-00a2-44ce-99d1-03e2492044d4-kube-api-access-pc9jt" seLinuxMountContext="" Feb 16 
02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922503 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2a83ddd-ffa5-4127-9099-91187ad9dbba" volumeName="kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922512 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8d00a01-aa48-4830-a558-93a31cb98b31" volumeName="kubernetes.io/projected/a8d00a01-aa48-4830-a558-93a31cb98b31-kube-api-access-lbmnx" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922522 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" volumeName="kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922531 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48863ff6-63ac-42d7-bac7-29d888c92db9" volumeName="kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922539 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76915cba-7c11-4bd8-9943-81de74e7781b" volumeName="kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922547 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89041b37-18f6-499d-89ec-a0523a25dc58" volumeName="kubernetes.io/projected/89041b37-18f6-499d-89ec-a0523a25dc58-kube-api-access-zvzpb" seLinuxMountContext="" Feb 16 
02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922556 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3065737-c7c0-4fbb-b484-f2a9204d4908" volumeName="kubernetes.io/projected/a3065737-c7c0-4fbb-b484-f2a9204d4908-kube-api-access-w7276" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922564 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e379cfaf-3a4c-40e7-8641-3524b3669295" volumeName="kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922574 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7317f91-9441-449f-9738-85da088cf94f" volumeName="kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922582 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7ac81030-35d1-4d86-844d-65d1156d8944" volumeName="kubernetes.io/configmap/7ac81030-35d1-4d86-844d-65d1156d8944-config-volume" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922591 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86af980a-2653-40c3-a368-a795d7fb8558" volumeName="kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls" seLinuxMountContext="" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922599 31559 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97b8261a-91e3-435e-93f8-0a17f30359fd" volumeName="kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls" seLinuxMountContext="" Feb 
16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922609 31559 reconstruct.go:97] "Volume reconstruction finished" Feb 16 02:22:27.922971 master-0 kubenswrapper[31559]: I0216 02:22:27.922616 31559 reconciler.go:26] "Reconciler: start to sync state" Feb 16 02:22:27.932859 master-0 kubenswrapper[31559]: I0216 02:22:27.923526 31559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 02:22:27.932859 master-0 kubenswrapper[31559]: I0216 02:22:27.923574 31559 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 02:22:27.932859 master-0 kubenswrapper[31559]: I0216 02:22:27.923600 31559 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 02:22:27.932859 master-0 kubenswrapper[31559]: E0216 02:22:27.923663 31559 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 02:22:27.932859 master-0 kubenswrapper[31559]: I0216 02:22:27.925554 31559 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 02:22:27.932859 master-0 kubenswrapper[31559]: I0216 02:22:27.926932 31559 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 02:22:27.935495 master-0 kubenswrapper[31559]: I0216 02:22:27.935414 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-9rvcj_48863ff6-63ac-42d7-bac7-29d888c92db9/cluster-autoscaler-operator/0.log" Feb 16 02:22:27.936565 master-0 kubenswrapper[31559]: I0216 02:22:27.936424 31559 generic.go:334] "Generic (PLEG): container finished" podID="48863ff6-63ac-42d7-bac7-29d888c92db9" containerID="1169ba7d80653acfb978496c38f306905e7dc8028752f494ebda1e9356b7b0b5" exitCode=255 Feb 16 02:22:27.942907 master-0 kubenswrapper[31559]: I0216 02:22:27.942853 31559 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_80f43f07-ce08-4c21-9463-ea983a110244/installer/0.log" Feb 16 02:22:27.943034 master-0 kubenswrapper[31559]: I0216 02:22:27.942928 31559 generic.go:334] "Generic (PLEG): container finished" podID="80f43f07-ce08-4c21-9463-ea983a110244" containerID="b4ee28c7394858e0cf4928c25f023b84c89ee6af3676ca6c853d7b858571c63f" exitCode=1 Feb 16 02:22:27.945747 master-0 kubenswrapper[31559]: I0216 02:22:27.945651 31559 generic.go:334] "Generic (PLEG): container finished" podID="17390d9a-148d-4927-a831-5bc4873c43d5" containerID="6ef739f702cc8cdaf44b732cdf8fab6363588dea12a3413585529df5415a8dd4" exitCode=0 Feb 16 02:22:27.949154 master-0 kubenswrapper[31559]: I0216 02:22:27.949084 31559 generic.go:334] "Generic (PLEG): container finished" podID="456e6c3a-c16c-470b-a0cd-bb79865b54f0" containerID="0315328a7c0259163748331a3160b081a82efff7afa5ee439e110ed017ac4025" exitCode=0 Feb 16 02:22:27.953479 master-0 kubenswrapper[31559]: I0216 02:22:27.952170 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 02:22:27.953479 master-0 kubenswrapper[31559]: I0216 02:22:27.952627 31559 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b" exitCode=1 Feb 16 02:22:27.953479 master-0 kubenswrapper[31559]: I0216 02:22:27.952652 31559 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="0a5e2f09da7456e3ddaab1d9e62abba19553e3d0c15bed35db33dc97212f0fb6" exitCode=0 Feb 16 02:22:27.955820 master-0 kubenswrapper[31559]: I0216 02:22:27.955734 31559 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-zlbd2_9be9fd24-fdb1-43dc-80b8-68020427bfd7/openshift-config-operator/3.log" Feb 16 02:22:27.959279 master-0 kubenswrapper[31559]: I0216 02:22:27.957794 31559 generic.go:334] "Generic (PLEG): container finished" podID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerID="7a516aec52660718ccf80f8448e598ce525c9666277508da67b9f886712a7edf" exitCode=255 Feb 16 02:22:27.959279 master-0 kubenswrapper[31559]: I0216 02:22:27.957822 31559 generic.go:334] "Generic (PLEG): container finished" podID="9be9fd24-fdb1-43dc-80b8-68020427bfd7" containerID="c10263f4e3b822ac06417bf8ee62f4bdd1bc382e08a14d4e83e8823933674455" exitCode=0 Feb 16 02:22:27.962703 master-0 kubenswrapper[31559]: I0216 02:22:27.962654 31559 generic.go:334] "Generic (PLEG): container finished" podID="91938be6-9ae4-4849-abe8-fc842daecd23" containerID="3af1f9b9834764b079edacabd51db4c771ce412df5b31f88b96200c070e64727" exitCode=0 Feb 16 02:22:27.965194 master-0 kubenswrapper[31559]: I0216 02:22:27.965157 31559 generic.go:334] "Generic (PLEG): container finished" podID="1c399bab-ff5e-4fd0-959b-354508c39eec" containerID="6cbcdbcbd020aa118f6d5315c540820d270da381e100f19561c88fefb12f18a7" exitCode=0 Feb 16 02:22:27.970844 master-0 kubenswrapper[31559]: I0216 02:22:27.970788 31559 generic.go:334] "Generic (PLEG): container finished" podID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" containerID="0b50b3880ac63d1428248172985d5d09cf2333281a593cd01b076731c8454c9a" exitCode=0 Feb 16 02:22:27.978801 master-0 kubenswrapper[31559]: I0216 02:22:27.978755 31559 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="2f315c09e62d7e5ecdac8433decccf201da1935e2dc178927c912fe29e35daf4" exitCode=0 Feb 16 02:22:27.982014 master-0 kubenswrapper[31559]: I0216 02:22:27.981936 31559 generic.go:334] "Generic (PLEG): container finished" podID="22739961-e322-47f1-b232-eaa4cc35319c" 
containerID="65a84e4599891e5bba954abf5a8b237b752aec5f97fd2072d6a6720ec3678bea" exitCode=0 Feb 16 02:22:27.983786 master-0 kubenswrapper[31559]: I0216 02:22:27.983719 31559 generic.go:334] "Generic (PLEG): container finished" podID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" containerID="6880014992fa93e0c0801558387fe49a32761a32c34c61cc54ee116a4f50adda" exitCode=0 Feb 16 02:22:27.984190 master-0 kubenswrapper[31559]: E0216 02:22:27.983973 31559 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 02:22:27.998679 master-0 kubenswrapper[31559]: I0216 02:22:27.998630 31559 generic.go:334] "Generic (PLEG): container finished" podID="8f918d5b-1a4c-4b56-98a4-5cef638bb615" containerID="2df1122300d4e774c3090e5a2115fbbbf79fe2cef81c2ccba8b6a290040b96a4" exitCode=0 Feb 16 02:22:28.001805 master-0 kubenswrapper[31559]: I0216 02:22:28.001753 31559 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="76eaee0713adf3d6273ac37acfe2aa28acfb59f88749a49fa74c637faf8ccbb7" exitCode=0 Feb 16 02:22:28.006102 master-0 kubenswrapper[31559]: I0216 02:22:28.005752 31559 generic.go:334] "Generic (PLEG): container finished" podID="a77e2f8f-d164-4a58-aab2-f3444c05cacb" containerID="992140dbf9ae65014df74f84c27ac943b6aa3fa48ebab2a299f13ea17d92ff73" exitCode=0 Feb 16 02:22:28.007553 master-0 kubenswrapper[31559]: I0216 02:22:28.007520 31559 generic.go:334] "Generic (PLEG): container finished" podID="5b79cd7f-675e-4778-be06-95e79b1c008a" containerID="7ece968240d91c23ca40de1bcd222697872432fda5d86a538de746813dee22af" exitCode=0 Feb 16 02:22:28.009290 master-0 kubenswrapper[31559]: I0216 02:22:28.009264 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-b47jp_6c02961f-30ec-4405-b7fa-9c4192342ae9/openshift-controller-manager-operator/1.log" Feb 16 02:22:28.009379 master-0 kubenswrapper[31559]: 
I0216 02:22:28.009303 31559 generic.go:334] "Generic (PLEG): container finished" podID="6c02961f-30ec-4405-b7fa-9c4192342ae9" containerID="907bfaa35e251ac0a99127e043064ef8d7828048025a8b998d4e1bd9a8208385" exitCode=255 Feb 16 02:22:28.014544 master-0 kubenswrapper[31559]: I0216 02:22:28.014500 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-nbjz6_04804a08-e3a5-46f3-abcb-967866834baa/ingress-operator/4.log" Feb 16 02:22:28.015048 master-0 kubenswrapper[31559]: I0216 02:22:28.015003 31559 generic.go:334] "Generic (PLEG): container finished" podID="04804a08-e3a5-46f3-abcb-967866834baa" containerID="0405f37172f7f0e66eacb12dabde4efc8bc5d9f141a69f5229eddcb49dd8fe93" exitCode=1 Feb 16 02:22:28.022968 master-0 kubenswrapper[31559]: I0216 02:22:28.022915 31559 generic.go:334] "Generic (PLEG): container finished" podID="6dcef814-353e-4985-9afc-9e545f7853ae" containerID="64b93c97323f7e51986ec036f1f46d7cb6a600efeaf1c716bc52e696eb3b4391" exitCode=0 Feb 16 02:22:28.023787 master-0 kubenswrapper[31559]: E0216 02:22:28.023744 31559 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 02:22:28.024830 master-0 kubenswrapper[31559]: I0216 02:22:28.024780 31559 generic.go:334] "Generic (PLEG): container finished" podID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" containerID="573f599caba2d6aea83d83677716e638746c9026d70482beb5f92bc432117189" exitCode=0 Feb 16 02:22:28.026875 master-0 kubenswrapper[31559]: I0216 02:22:28.026834 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-frvgm_27a42eb0-677c-414d-b0ec-f945ec39b7e9/cluster-baremetal-operator/1.log" Feb 16 02:22:28.027306 master-0 kubenswrapper[31559]: I0216 02:22:28.027210 31559 generic.go:334] "Generic (PLEG): container finished" podID="27a42eb0-677c-414d-b0ec-f945ec39b7e9" 
containerID="3ea074aea2a594e75d8dbcb8474e8c5349cb474e287dbfac0d8bcbc83149c9d5" exitCode=1 Feb 16 02:22:28.030692 master-0 kubenswrapper[31559]: I0216 02:22:28.030655 31559 generic.go:334] "Generic (PLEG): container finished" podID="1f2d2601-481d-4e86-ac4c-3d34d5691261" containerID="cb69fe6420f408760369adf6194d1b660de997c81bbd72a933db5281cb306f1b" exitCode=0 Feb 16 02:22:28.030692 master-0 kubenswrapper[31559]: I0216 02:22:28.030691 31559 generic.go:334] "Generic (PLEG): container finished" podID="1f2d2601-481d-4e86-ac4c-3d34d5691261" containerID="b6ab399a4728a0f121aaa4c90c21fdc28f10d4d7510c12e5c86883bb65e07354" exitCode=0 Feb 16 02:22:28.030799 master-0 kubenswrapper[31559]: I0216 02:22:28.030706 31559 generic.go:334] "Generic (PLEG): container finished" podID="1f2d2601-481d-4e86-ac4c-3d34d5691261" containerID="1a2437279ddd7677a612b841ab7564a6229cb4e0606b54d09369835e4da58be3" exitCode=0 Feb 16 02:22:28.043085 master-0 kubenswrapper[31559]: I0216 02:22:28.043048 31559 generic.go:334] "Generic (PLEG): container finished" podID="a8f33151-61df-4b66-ba85-9ba210779059" containerID="8cdb2cf816b95ba9c46ea2bd0950b6c6b1a6f09cea50132c976d896bf508decf" exitCode=0 Feb 16 02:22:28.045147 master-0 kubenswrapper[31559]: I0216 02:22:28.045090 31559 generic.go:334] "Generic (PLEG): container finished" podID="bde83629-b39c-401e-bc30-5ce205638918" containerID="878600941ff09ae766ef1ccc9a324f0c6d5cbe6f0b05660545fe5e976ad49b02" exitCode=0 Feb 16 02:22:28.047027 master-0 kubenswrapper[31559]: I0216 02:22:28.046992 31559 generic.go:334] "Generic (PLEG): container finished" podID="980aa005-f51d-4ca2-aee6-a6fdeefd86d0" containerID="a6d90aff6f8ce2ab976f48907c4d1b01e98afde362aa201e2dc712d88fff6eb6" exitCode=0 Feb 16 02:22:28.048421 master-0 kubenswrapper[31559]: I0216 02:22:28.048390 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_8ea4c28c-8f53-4b41-9c85-c8c50599d7cd/installer/0.log" Feb 16 02:22:28.048518 master-0 
kubenswrapper[31559]: I0216 02:22:28.048431 31559 generic.go:334] "Generic (PLEG): container finished" podID="8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" containerID="bb9ffb6ca918ba3341c8df0e7c8c8ba7325d86a68eb6b2856270c9f7326551b5" exitCode=1 Feb 16 02:22:28.052484 master-0 kubenswrapper[31559]: I0216 02:22:28.051996 31559 generic.go:334] "Generic (PLEG): container finished" podID="1487f82c-c14a-4f65-be77-5af2612f56f4" containerID="49af9cb6f60854b313f38f722eafca91d152dc52885026eb8064608e8405a048" exitCode=0 Feb 16 02:22:28.052484 master-0 kubenswrapper[31559]: I0216 02:22:28.052035 31559 generic.go:334] "Generic (PLEG): container finished" podID="1487f82c-c14a-4f65-be77-5af2612f56f4" containerID="21b7e564dfe8c5595be3274f81dfdbf1c60d502a3297f83f36bd1e41f4f2b4cb" exitCode=0 Feb 16 02:22:28.057144 master-0 kubenswrapper[31559]: I0216 02:22:28.057062 31559 generic.go:334] "Generic (PLEG): container finished" podID="83883885-f493-4559-9c0f-e28d69712475" containerID="2a160f2c1742de9f4ba99becfe5db3107e11e652c40ff70cb3349e1627a9a147" exitCode=0 Feb 16 02:22:28.060894 master-0 kubenswrapper[31559]: I0216 02:22:28.060843 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-tkqng_23755f7f-dce6-4dcf-9664-22e3aedb5c81/package-server-manager/0.log" Feb 16 02:22:28.061961 master-0 kubenswrapper[31559]: I0216 02:22:28.061920 31559 generic.go:334] "Generic (PLEG): container finished" podID="23755f7f-dce6-4dcf-9664-22e3aedb5c81" containerID="2f12531c82f370a1fa09ec7f01326ed0fd582df87939a5c0bd560230586f4734" exitCode=1 Feb 16 02:22:28.064253 master-0 kubenswrapper[31559]: I0216 02:22:28.064212 31559 generic.go:334] "Generic (PLEG): container finished" podID="f7317f91-9441-449f-9738-85da088cf94f" containerID="b0f87ddc237d60c2bab39a1452b1e36c685e800e91756d3d4eee6ecf6e94ac8b" exitCode=0 Feb 16 02:22:28.068377 master-0 kubenswrapper[31559]: I0216 02:22:28.068335 31559 generic.go:334] "Generic (PLEG): container 
finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="0858acab9d05c2a71790635b1c93f375645e501dd52d5da79fb7b1cbf9b57e86" exitCode=0 Feb 16 02:22:28.068377 master-0 kubenswrapper[31559]: I0216 02:22:28.068369 31559 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="e44874f0350470f24d5ea4a5701795fe3efa7441e0282bd848060a5c5089ab29" exitCode=0 Feb 16 02:22:28.068529 master-0 kubenswrapper[31559]: I0216 02:22:28.068382 31559 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="f32a8a71ff757721727f0a15b091975f54ceee8df971155b55280b5af1e45ccf" exitCode=0 Feb 16 02:22:28.068529 master-0 kubenswrapper[31559]: I0216 02:22:28.068395 31559 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="d0f78aff7e0b714e84872137c91a78811349c06129b280efb18e955c4097bbb8" exitCode=0 Feb 16 02:22:28.068529 master-0 kubenswrapper[31559]: I0216 02:22:28.068406 31559 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="93e94006a31f3ba668f3844369615d0dcc4ff0267ec4f323096fa745c6b0818c" exitCode=0 Feb 16 02:22:28.068529 master-0 kubenswrapper[31559]: I0216 02:22:28.068415 31559 generic.go:334] "Generic (PLEG): container finished" podID="f91346c7-bde4-4fa2-ac27-b5f0d25eeb75" containerID="25d520296eb3e3e0c239fcaebd996a70fe80cf8a6487bc284e94db513bb2809d" exitCode=0 Feb 16 02:22:28.072206 master-0 kubenswrapper[31559]: I0216 02:22:28.072158 31559 generic.go:334] "Generic (PLEG): container finished" podID="4a5b01c1-1231-4e69-8b6c-c4981b65b26e" containerID="5fc216122a0910fe569f2678eb8d9427d5895c0ca4368e57e2f60b2f9f7164e2" exitCode=0 Feb 16 02:22:28.075325 master-0 kubenswrapper[31559]: I0216 02:22:28.075297 31559 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj_c4a146b2-c712-408a-97d8-5de3a84f3aaf/config-sync-controllers/0.log" Feb 16 02:22:28.075785 master-0 kubenswrapper[31559]: I0216 02:22:28.075744 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj_c4a146b2-c712-408a-97d8-5de3a84f3aaf/cluster-cloud-controller-manager/0.log" Feb 16 02:22:28.075840 master-0 kubenswrapper[31559]: I0216 02:22:28.075782 31559 generic.go:334] "Generic (PLEG): container finished" podID="c4a146b2-c712-408a-97d8-5de3a84f3aaf" containerID="7b8d5b60c64a954457f5d3632cc4eab151ef7d06b7f4c5d6693868e55012ceda" exitCode=1 Feb 16 02:22:28.075840 master-0 kubenswrapper[31559]: I0216 02:22:28.075800 31559 generic.go:334] "Generic (PLEG): container finished" podID="c4a146b2-c712-408a-97d8-5de3a84f3aaf" containerID="0e6dfb235fe16f13df03b4a59ee89cd057fbaeee70e2959a56474787817390af" exitCode=1 Feb 16 02:22:28.080484 master-0 kubenswrapper[31559]: I0216 02:22:28.080454 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-r5l9f_a8d00a01-aa48-4830-a558-93a31cb98b31/control-plane-machine-set-operator/0.log" Feb 16 02:22:28.080662 master-0 kubenswrapper[31559]: I0216 02:22:28.080501 31559 generic.go:334] "Generic (PLEG): container finished" podID="a8d00a01-aa48-4830-a558-93a31cb98b31" containerID="cbb215837f1e5b2ced545b28b81dafe9fa0f617cf84f3ee5cf431ddb83b1fb21" exitCode=1 Feb 16 02:22:28.082385 master-0 kubenswrapper[31559]: I0216 02:22:28.082362 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_9063971f-d258-4c4b-9e12-06b7de390d3b/installer/0.log" Feb 16 02:22:28.082569 master-0 kubenswrapper[31559]: I0216 02:22:28.082398 31559 generic.go:334] "Generic (PLEG): container finished" 
podID="9063971f-d258-4c4b-9e12-06b7de390d3b" containerID="f61bf9622140879bd257d70cb26fe6250ec7cfc5858c85cf7bce7b8c5f8c9dbd" exitCode=1 Feb 16 02:22:28.084110 master-0 kubenswrapper[31559]: E0216 02:22:28.084081 31559 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 02:22:28.086836 master-0 kubenswrapper[31559]: I0216 02:22:28.086620 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-zc2br_857357a1-dc98-4dd5-98b3-c94b1ddf9dec/manager/1.log" Feb 16 02:22:28.088728 master-0 kubenswrapper[31559]: I0216 02:22:28.088664 31559 generic.go:334] "Generic (PLEG): container finished" podID="857357a1-dc98-4dd5-98b3-c94b1ddf9dec" containerID="0f7ba85e2cd54b9d76dada87f2712d689c26beb2b9f0778369e602e1815aefe6" exitCode=1 Feb 16 02:22:28.092326 master-0 kubenswrapper[31559]: I0216 02:22:28.092291 31559 generic.go:334] "Generic (PLEG): container finished" podID="c9cd32bc-a13a-44ee-ba52-7bb335c7007b" containerID="3c59868e46c60a0139fbb9feace033d7ff3288c7e5f3febf6586656bff57983b" exitCode=0 Feb 16 02:22:28.094672 master-0 kubenswrapper[31559]: I0216 02:22:28.094629 31559 generic.go:334] "Generic (PLEG): container finished" podID="1743372f-bdb0-4558-b47b-3714f3aa3fde" containerID="1f742ab76573db69bc143df83fcf581f4c09f3de9ec005f01809b1af5690b4d3" exitCode=0 Feb 16 02:22:28.100916 master-0 kubenswrapper[31559]: I0216 02:22:28.100867 31559 generic.go:334] "Generic (PLEG): container finished" podID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerID="e0727143cdcad3cc5251095472bb96e72f7ab1b59c0a90ac12887c7c83657168" exitCode=0 Feb 16 02:22:28.103047 master-0 kubenswrapper[31559]: I0216 02:22:28.103017 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-qw2zq_fec84b8a-a0d1-4b07-8827-cef0beb89ecd/machine-api-operator/0.log" Feb 16 02:22:28.103589 master-0 kubenswrapper[31559]: I0216 
02:22:28.103554 31559 generic.go:334] "Generic (PLEG): container finished" podID="fec84b8a-a0d1-4b07-8827-cef0beb89ecd" containerID="54e4f3bd63acfa80c546903eac7441d247818158a150a69ea32c8395383dd3ba" exitCode=255 Feb 16 02:22:28.105916 master-0 kubenswrapper[31559]: I0216 02:22:28.105867 31559 generic.go:334] "Generic (PLEG): container finished" podID="ad700b17-ba2a-41d4-8bec-538a009a613b" containerID="0f4270e5e44e4ba946d497e39a29fcdd94ebfa1e344531fd5ab06971f1a503e0" exitCode=0 Feb 16 02:22:28.116154 master-0 kubenswrapper[31559]: I0216 02:22:28.116044 31559 generic.go:334] "Generic (PLEG): container finished" podID="0a900f93-91c9-4782-89a3-1cc09f3aec95" containerID="c8ecc68c95de851226f7803b9643bfa846686a0947e8d9573c40fc13da9cd25c" exitCode=0 Feb 16 02:22:28.120479 master-0 kubenswrapper[31559]: I0216 02:22:28.120402 31559 generic.go:334] "Generic (PLEG): container finished" podID="30fef0d5-46ea-4fa3-9ffa-88187d010ffe" containerID="4f4386d569551a2cb1add9279ae5e39db1d0c3382f70cefdecbf2167f005bf64" exitCode=0 Feb 16 02:22:28.138784 master-0 kubenswrapper[31559]: I0216 02:22:28.138732 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log" Feb 16 02:22:28.140084 master-0 kubenswrapper[31559]: I0216 02:22:28.140022 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log" Feb 16 02:22:28.141330 master-0 kubenswrapper[31559]: I0216 02:22:28.141303 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log" Feb 16 02:22:28.141420 master-0 kubenswrapper[31559]: I0216 02:22:28.141357 31559 generic.go:334] "Generic (PLEG): container finished" 
podID="532487ad51c30257b744e7c1c79fb34f" containerID="da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15" exitCode=255 Feb 16 02:22:28.141420 master-0 kubenswrapper[31559]: I0216 02:22:28.141390 31559 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="c315dd4013bf47ddda1fd2c99a095489c35ec0eda907e0f77d5a4d2d27ec8d89" exitCode=137 Feb 16 02:22:28.141420 master-0 kubenswrapper[31559]: I0216 02:22:28.141409 31559 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="c1607c7a684a009a85d360c6358aedc027d89ca14606abafaf65b0d9cbaca7c9" exitCode=1 Feb 16 02:22:28.143866 master-0 kubenswrapper[31559]: I0216 02:22:28.143826 31559 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="6bc4b5ee1e89ed7a76ec9068e6cdb19289d70c03bd852b3dc8e93c9d7f9e1ba4" exitCode=0 Feb 16 02:22:28.146886 master-0 kubenswrapper[31559]: I0216 02:22:28.146843 31559 generic.go:334] "Generic (PLEG): container finished" podID="5b923d74-bad3-4780-8e7e-e8365ac9ea06" containerID="aea61f85c2790ce795bc5e29cc6af9ceb9ed3e98bdf75492709e6ca02b00c1e9" exitCode=0 Feb 16 02:22:28.146886 master-0 kubenswrapper[31559]: I0216 02:22:28.146884 31559 generic.go:334] "Generic (PLEG): container finished" podID="5b923d74-bad3-4780-8e7e-e8365ac9ea06" containerID="9e46f269d8f0d63b18fad69b8b71e42902785d9a1b409d9e8c0b0f57a24c8b43" exitCode=0 Feb 16 02:22:28.151386 master-0 kubenswrapper[31559]: I0216 02:22:28.151307 31559 generic.go:334] "Generic (PLEG): container finished" podID="d870332c-2498-4135-a9b3-a71e67c2805b" containerID="8a15ec6edf531733b3fdbab5958c503602c9f05e39693986c688462128642a62" exitCode=0 Feb 16 02:22:28.153801 master-0 kubenswrapper[31559]: I0216 02:22:28.153753 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kffmg_dbc5b101-936f-4bf3-bbf3-f30966b0ab50/approver/1.log" 
Feb 16 02:22:28.154451 master-0 kubenswrapper[31559]: I0216 02:22:28.154386 31559 generic.go:334] "Generic (PLEG): container finished" podID="dbc5b101-936f-4bf3-bbf3-f30966b0ab50" containerID="e2b95d6d9e0e9f98872131f6b8b2e0daa77ccd636475f9813d73f42413bc869a" exitCode=1 Feb 16 02:22:28.156281 master-0 kubenswrapper[31559]: I0216 02:22:28.156241 31559 generic.go:334] "Generic (PLEG): container finished" podID="8269ffdd-7357-4a8c-b578-0f482558f93e" containerID="3d85392af80e65ab2985e54a4974e1f024f9a9bb02545f3e0dcd5540c8518016" exitCode=0 Feb 16 02:22:28.163726 master-0 kubenswrapper[31559]: I0216 02:22:28.163662 31559 generic.go:334] "Generic (PLEG): container finished" podID="89041b37-18f6-499d-89ec-a0523a25dc58" containerID="7d03945f26a5afefa2326f18d81617f6a565587e2b4f83c138528190839d7076" exitCode=0 Feb 16 02:22:28.163726 master-0 kubenswrapper[31559]: I0216 02:22:28.163707 31559 generic.go:334] "Generic (PLEG): container finished" podID="89041b37-18f6-499d-89ec-a0523a25dc58" containerID="22ae7ce64aa3ec99af80eb81cfd477c02a07e643cce741b7421f2d5683a30b06" exitCode=0 Feb 16 02:22:28.169252 master-0 kubenswrapper[31559]: I0216 02:22:28.169220 31559 generic.go:334] "Generic (PLEG): container finished" podID="4733c2df-0f5a-4696-b8c6-2568ebc7debc" containerID="3537f96b40e8859a6a366ec6550aaba73f34d9f862f4f2e89eccfbc047d01b00" exitCode=0 Feb 16 02:22:28.172657 master-0 kubenswrapper[31559]: I0216 02:22:28.172622 31559 generic.go:334] "Generic (PLEG): container finished" podID="e379cfaf-3a4c-40e7-8641-3524b3669295" containerID="29588e18b21fc378729e293fc4d3e978d87e6e1444fa9f91d1cf677cd080ce85" exitCode=0 Feb 16 02:22:28.183692 master-0 kubenswrapper[31559]: I0216 02:22:28.183651 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-vqtcl_c442d349-668b-4d01-a097-5981b7a04eac/machine-approver-controller/0.log" Feb 16 02:22:28.184329 master-0 kubenswrapper[31559]: I0216 02:22:28.184285 31559 generic.go:334] 
"Generic (PLEG): container finished" podID="c442d349-668b-4d01-a097-5981b7a04eac" containerID="7519ecb1c789c2c061040595067f6c82e07370c9c08904abeb4e65bb29dba279" exitCode=255 Feb 16 02:22:28.184410 master-0 kubenswrapper[31559]: E0216 02:22:28.184370 31559 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 02:22:28.209082 master-0 kubenswrapper[31559]: I0216 02:22:28.208877 31559 generic.go:334] "Generic (PLEG): container finished" podID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerID="fb538a41cea5f683a2ab8b99be06eb74affc528bd353ff7cabad5516264bee81" exitCode=0 Feb 16 02:22:28.220204 master-0 kubenswrapper[31559]: I0216 02:22:28.220139 31559 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="9fdba8862d6e8ad29a9f7bd67e796348970e6fc6146e389d31c604eee300ee18" exitCode=0 Feb 16 02:22:28.220204 master-0 kubenswrapper[31559]: I0216 02:22:28.220178 31559 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="8a29b6deeb6009e1fe7a931b2cf89177c0cfa2c70c5aa2feb9a3b9bb5b6df61d" exitCode=0 Feb 16 02:22:28.220204 master-0 kubenswrapper[31559]: I0216 02:22:28.220189 31559 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="ec03f418d636771605fae0ee7e9daf8aa0945bcc9619f802f8819cb5a43f7d70" exitCode=0 Feb 16 02:22:28.223498 master-0 kubenswrapper[31559]: I0216 02:22:28.223457 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-g9lcm_27d876a7-6a48-4942-ad96-ed8ed3aa104b/manager/1.log" Feb 16 02:22:28.223858 master-0 kubenswrapper[31559]: E0216 02:22:28.223827 31559 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 02:22:28.224160 master-0 kubenswrapper[31559]: I0216 02:22:28.224122 31559 
generic.go:334] "Generic (PLEG): container finished" podID="27d876a7-6a48-4942-ad96-ed8ed3aa104b" containerID="bb54fbd185420265f2400dbce5bb93b2c07ec50f3a0611291aab6640cc25bca3" exitCode=1 Feb 16 02:22:28.230236 master-0 kubenswrapper[31559]: I0216 02:22:28.230192 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-466x9_a3065737-c7c0-4fbb-b484-f2a9204d4908/snapshot-controller/4.log" Feb 16 02:22:28.230317 master-0 kubenswrapper[31559]: I0216 02:22:28.230241 31559 generic.go:334] "Generic (PLEG): container finished" podID="a3065737-c7c0-4fbb-b484-f2a9204d4908" containerID="10c9eace2e116c89a552aa72158d1a899b8f235b70b43e057de19dffd38d865e" exitCode=1 Feb 16 02:22:28.233312 master-0 kubenswrapper[31559]: I0216 02:22:28.233217 31559 generic.go:334] "Generic (PLEG): container finished" podID="5f810ea0-e32d-4097-beca-5194349a57a6" containerID="0c089d20e3c3f3c6a423f5f3bfd60d7fd11adf6b1d0bb658070186cbb28fa86e" exitCode=0 Feb 16 02:22:28.233312 master-0 kubenswrapper[31559]: I0216 02:22:28.233245 31559 generic.go:334] "Generic (PLEG): container finished" podID="5f810ea0-e32d-4097-beca-5194349a57a6" containerID="1699b8c4f02ae23c693a51c49da941feb9c55db5efaac1f61f4c4aee2139bea0" exitCode=0 Feb 16 02:22:28.235587 master-0 kubenswrapper[31559]: I0216 02:22:28.235521 31559 generic.go:334] "Generic (PLEG): container finished" podID="724ac845-3835-458b-9645-e665be135ff9" containerID="f4cc6bf86c33c3e578a43a1648d54a69838bb79c81f9072d23717330a60f1d97" exitCode=0 Feb 16 02:22:28.242085 master-0 kubenswrapper[31559]: I0216 02:22:28.242031 31559 generic.go:334] "Generic (PLEG): container finished" podID="9defdfff-eb18-4beb-9591-918d0e4b4236" containerID="ea913d4d2d0edfcbff7d836320baff12a198f69ef86939ba8c7d3ee238eec033" exitCode=0 Feb 16 02:22:28.284493 master-0 kubenswrapper[31559]: E0216 02:22:28.284429 31559 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" 
not found" Feb 16 02:22:28.385004 master-0 kubenswrapper[31559]: E0216 02:22:28.384528 31559 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 02:22:28.400515 master-0 kubenswrapper[31559]: I0216 02:22:28.400468 31559 manager.go:324] Recovery completed Feb 16 02:22:28.484826 master-0 kubenswrapper[31559]: E0216 02:22:28.484650 31559 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 02:22:28.504565 master-0 kubenswrapper[31559]: I0216 02:22:28.504490 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.508431 master-0 kubenswrapper[31559]: I0216 02:22:28.508361 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.508431 master-0 kubenswrapper[31559]: I0216 02:22:28.508421 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.508431 master-0 kubenswrapper[31559]: I0216 02:22:28.508466 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.515239 master-0 kubenswrapper[31559]: I0216 02:22:28.515182 31559 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 02:22:28.515239 master-0 kubenswrapper[31559]: I0216 02:22:28.515216 31559 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 02:22:28.515239 master-0 kubenswrapper[31559]: I0216 02:22:28.515248 31559 state_mem.go:36] "Initialized new in-memory state store" Feb 16 02:22:28.515571 master-0 kubenswrapper[31559]: I0216 02:22:28.515551 31559 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 16 02:22:28.515633 master-0 kubenswrapper[31559]: I0216 02:22:28.515572 31559 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 16 02:22:28.515633 master-0 
kubenswrapper[31559]: I0216 02:22:28.515605 31559 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 16 02:22:28.515633 master-0 kubenswrapper[31559]: I0216 02:22:28.515619 31559 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 16 02:22:28.515633 master-0 kubenswrapper[31559]: I0216 02:22:28.515632 31559 policy_none.go:49] "None policy: Start" Feb 16 02:22:28.520337 master-0 kubenswrapper[31559]: I0216 02:22:28.520290 31559 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 02:22:28.520337 master-0 kubenswrapper[31559]: I0216 02:22:28.520334 31559 state_mem.go:35] "Initializing new in-memory state store" Feb 16 02:22:28.520690 master-0 kubenswrapper[31559]: I0216 02:22:28.520647 31559 state_mem.go:75] "Updated machine memory state" Feb 16 02:22:28.520690 master-0 kubenswrapper[31559]: I0216 02:22:28.520674 31559 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 16 02:22:28.544949 master-0 kubenswrapper[31559]: I0216 02:22:28.544881 31559 manager.go:334] "Starting Device Plugin manager" Feb 16 02:22:28.545158 master-0 kubenswrapper[31559]: I0216 02:22:28.545011 31559 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 02:22:28.545158 master-0 kubenswrapper[31559]: I0216 02:22:28.545034 31559 server.go:79] "Starting device plugin registration server" Feb 16 02:22:28.545854 master-0 kubenswrapper[31559]: I0216 02:22:28.545813 31559 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 02:22:28.545943 master-0 kubenswrapper[31559]: I0216 02:22:28.545845 31559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 02:22:28.546091 master-0 kubenswrapper[31559]: I0216 02:22:28.546030 31559 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 02:22:28.547704 master-0 
kubenswrapper[31559]: I0216 02:22:28.547670 31559 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 02:22:28.547704 master-0 kubenswrapper[31559]: I0216 02:22:28.547702 31559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 02:22:28.553744 master-0 kubenswrapper[31559]: E0216 02:22:28.553681 31559 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 02:22:28.625046 master-0 kubenswrapper[31559]: I0216 02:22:28.624931 31559 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0"] Feb 16 02:22:28.625356 master-0 kubenswrapper[31559]: I0216 02:22:28.625092 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.628749 master-0 kubenswrapper[31559]: I0216 02:22:28.628692 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.628749 master-0 kubenswrapper[31559]: I0216 02:22:28.628744 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.628958 master-0 kubenswrapper[31559]: I0216 02:22:28.628762 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.629044 master-0 kubenswrapper[31559]: I0216 02:22:28.629024 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.629520 master-0 kubenswrapper[31559]: I0216 02:22:28.629379 31559 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.633972 master-0 kubenswrapper[31559]: I0216 02:22:28.633902 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.634085 master-0 kubenswrapper[31559]: I0216 02:22:28.633974 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.634085 master-0 kubenswrapper[31559]: I0216 02:22:28.634002 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.634665 master-0 kubenswrapper[31559]: I0216 02:22:28.634615 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.634665 master-0 kubenswrapper[31559]: I0216 02:22:28.634656 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.634842 master-0 kubenswrapper[31559]: I0216 02:22:28.634673 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.635001 master-0 kubenswrapper[31559]: I0216 02:22:28.634918 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.635512 master-0 kubenswrapper[31559]: I0216 02:22:28.635431 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.639038 master-0 kubenswrapper[31559]: I0216 02:22:28.638996 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.639038 master-0 kubenswrapper[31559]: I0216 02:22:28.639035 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 
02:22:28.639240 master-0 kubenswrapper[31559]: I0216 02:22:28.639052 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.639303 master-0 kubenswrapper[31559]: I0216 02:22:28.639268 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.639677 master-0 kubenswrapper[31559]: I0216 02:22:28.639625 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.640686 master-0 kubenswrapper[31559]: I0216 02:22:28.640627 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.640798 master-0 kubenswrapper[31559]: I0216 02:22:28.640689 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.640798 master-0 kubenswrapper[31559]: I0216 02:22:28.640714 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.644293 master-0 kubenswrapper[31559]: I0216 02:22:28.644247 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.644468 master-0 kubenswrapper[31559]: I0216 02:22:28.644298 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.644468 master-0 kubenswrapper[31559]: I0216 02:22:28.644316 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.644618 master-0 kubenswrapper[31559]: I0216 02:22:28.644570 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.644698 master-0 kubenswrapper[31559]: I0216 02:22:28.644572 31559 kubelet_node_status.go:724] "Recording event message 
for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.644698 master-0 kubenswrapper[31559]: I0216 02:22:28.644679 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.644820 master-0 kubenswrapper[31559]: I0216 02:22:28.644701 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.644820 master-0 kubenswrapper[31559]: I0216 02:22:28.644766 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.646267 master-0 kubenswrapper[31559]: I0216 02:22:28.646216 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.649129 master-0 kubenswrapper[31559]: I0216 02:22:28.649075 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.649129 master-0 kubenswrapper[31559]: I0216 02:22:28.649119 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.649310 master-0 kubenswrapper[31559]: I0216 02:22:28.649136 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.649458 master-0 kubenswrapper[31559]: I0216 02:22:28.649390 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.649673 master-0 kubenswrapper[31559]: I0216 02:22:28.649621 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.649673 master-0 kubenswrapper[31559]: I0216 02:22:28.649666 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.649791 master-0 kubenswrapper[31559]: I0216 
02:22:28.649683 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.649791 master-0 kubenswrapper[31559]: I0216 02:22:28.649710 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.650575 master-0 kubenswrapper[31559]: I0216 02:22:28.650517 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.650575 master-0 kubenswrapper[31559]: I0216 02:22:28.650565 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.650826 master-0 kubenswrapper[31559]: I0216 02:22:28.650584 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.650826 master-0 kubenswrapper[31559]: I0216 02:22:28.650683 31559 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:22:28.653846 master-0 kubenswrapper[31559]: I0216 02:22:28.653780 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.653846 master-0 kubenswrapper[31559]: I0216 02:22:28.653834 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:28.653846 master-0 kubenswrapper[31559]: I0216 02:22:28.653855 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.654164 master-0 kubenswrapper[31559]: I0216 02:22:28.654110 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.654237 master-0 kubenswrapper[31559]: I0216 02:22:28.654165 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 
16 02:22:28.654237 master-0 kubenswrapper[31559]: I0216 02:22:28.654183 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.654237 master-0 kubenswrapper[31559]: I0216 02:22:28.654211 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae309ad89c83d46c9fddf6708eda09d37f1fa06aa9277a0a246c53f3525897c" Feb 16 02:22:28.654471 master-0 kubenswrapper[31559]: I0216 02:22:28.654265 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"93fa1f75b959883173b882fc0c221239f90d8c0f0c6f464304aa368bf78625b2"} Feb 16 02:22:28.654471 master-0 kubenswrapper[31559]: I0216 02:22:28.654349 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"1315b8b70fa662058fdbb3d25c0b57bbe5b7832e14fd3593c7b3c8b6954d366b"} Feb 16 02:22:28.654471 master-0 kubenswrapper[31559]: I0216 02:22:28.654371 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"0a5e2f09da7456e3ddaab1d9e62abba19553e3d0c15bed35db33dc97212f0fb6"} Feb 16 02:22:28.654471 master-0 kubenswrapper[31559]: I0216 02:22:28.654389 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"07f7d55685e3891e139cfcc8fc39a4525349b15753a33187f5704239bf899022"} Feb 16 02:22:28.654471 master-0 kubenswrapper[31559]: I0216 02:22:28.654431 31559 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5001e844bddc55d075219ca34aad85fbea6d50d9661dd853374d08556f873a41" Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654484 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca81682ef82021a12533b5444bd8e4c0c15330d2db438a6a205cb774dd456a1" Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654501 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"c1ba2d68a64d6fb932ae524cee345f61dbf00431978608d5398de81a322f1f49"} Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654519 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"23f1844f084bd578a72562cf5fb2523c0cb62c0a661c5a07d573cb0c56ece51d"} Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654539 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"fd47e260d84ca6db305938f1f4e1895a6f6bdda99aeb361b11a3ab5204667a82"} Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654556 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"8390bbe4d8742fdad642a6f50a5fdda06aa95077fda8b2a4a38589b254209605"} Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654572 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"fec477df3910b6405819fe05a8d3d1b8456afafc6dfca3d23c64fa136cd595d6"} Feb 16 
02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654589 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerDied","Data":"2f315c09e62d7e5ecdac8433decccf201da1935e2dc178927c912fe29e35daf4"} Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654609 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"47742d18510812b119307e6d49d3c726c8e73fb1e202f5e57c5cfb5945faf19d"} Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654637 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98dcc1912f40abe649e3505484d85a2636bf298671bda45fdf2eb9864ccd1111" Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654654 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebf941eaba3a97825b1c8002f4b27a20","Type":"ContainerStarted","Data":"c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa"} Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654521 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:28.654794 master-0 kubenswrapper[31559]: I0216 02:22:28.654671 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebf941eaba3a97825b1c8002f4b27a20","Type":"ContainerStarted","Data":"460579bff076abf6e4d419f44b68400fd50e9b2bc9a03fc49494f7b68ef04045"} Feb 16 02:22:28.655700 master-0 kubenswrapper[31559]: I0216 02:22:28.655373 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"0e646a3e6607e9273936bfd732054c28836f671166eb8f720bf310ef03bc905c"} Feb 16 02:22:28.655700 master-0 kubenswrapper[31559]: I0216 02:22:28.655456 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"b1719f823da9232e4df52b63f4eabde4790d28c377a06111c61f6b0b4e7b4fdc"} Feb 16 02:22:28.655700 master-0 kubenswrapper[31559]: I0216 02:22:28.655468 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"779e04f06bfe5ff236c55c41264d4f056bfa7e1b617ccd26b61c5cc8d3f21521"} Feb 16 02:22:28.655700 master-0 kubenswrapper[31559]: I0216 02:22:28.655480 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerDied","Data":"76eaee0713adf3d6273ac37acfe2aa28acfb59f88749a49fa74c637faf8ccbb7"} Feb 16 02:22:28.655700 master-0 kubenswrapper[31559]: I0216 02:22:28.655491 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"92790def01d9723678e72ddb37afa203e6ce284de27eb1ef78e5d202635e3d9e"} Feb 16 02:22:28.655700 master-0 kubenswrapper[31559]: I0216 02:22:28.655524 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f617d4c81a045b3a0a2096d5b392bf8b99ea0de59561036edc52c149d97f12ac" Feb 16 02:22:28.655700 master-0 kubenswrapper[31559]: I0216 02:22:28.655567 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="519d12e7d67d992627ab3afec4b63569e16dcc4c57e6118793f1a36ff0f10027" Feb 16 
02:22:28.657067 master-0 kubenswrapper[31559]: I0216 02:22:28.656570 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cda764f5f0471c7463d4e4932eaea865d91aa81030462076bd5270b356dfca" Feb 16 02:22:28.657067 master-0 kubenswrapper[31559]: I0216 02:22:28.656810 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58997b8fc48a379ceddf1aa04ebaf598050ecf9e7d85b482c7e790d8fddb7296" Feb 16 02:22:28.657067 master-0 kubenswrapper[31559]: I0216 02:22:28.656917 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053a88a441f8640d0f49f51a0dfba08da37a8b783fce751c0810295de49cf426" Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657020 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"ab9f31d8a9dea7f17fe5df1556062b9ee37acd8a1e22d617b3329084d777dce1"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657549 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"789e61bec232bf870ef2e4f73549435ac6af8ac001a93d4407c58240635552e4"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657580 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657614 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"5fc96fb916b196b3dbc229cfd525c7d85b5052106365d264bf8c22b6c5329dbb"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657636 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"c315dd4013bf47ddda1fd2c99a095489c35ec0eda907e0f77d5a4d2d27ec8d89"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657678 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"47f370468f9a506b6024de7fb2029d49ff3b6445c9e16b06204e3c886ebdacc9"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657710 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"c1607c7a684a009a85d360c6358aedc027d89ca14606abafaf65b0d9cbaca7c9"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657732 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657760 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b97e97c35ace1d8e3342fb279b58e63ecd66a09abba6b504fe344a2864fe27" Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: E0216 02:22:28.657768 31559 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: 
autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657853 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b892c134020f30aa7292b1e83d1425d52b4470fccaddc25006dc9605b060e9" Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657915 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8066623cf1d51e73e4a446a8cf51f10003367928813b2f433dc4283c2b007eff" Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.657988 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"b50b0f44e3b3780166f8a63fd8d23b62ef36b9fc26de6eb074f3ec5177cf1af3"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.658090 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"372042b6282c1ebffee97f4b7cbb12648d0476752e838cf6bdaaa46e9c02aadb"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.658112 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"3299356ddf4a192b926c81a482f497d9de881fe9f374a90912b930bcbf67c6b6"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.658131 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"ae32830ca9150c203b10939e294a55ba6320c100ed6704428d65d387485e03fc"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.658153 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"5afdb28db1102b8680211572600d2ea86ff6a2b01f828f45c36202b1f159b2ff"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.658173 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"9fdba8862d6e8ad29a9f7bd67e796348970e6fc6146e389d31c604eee300ee18"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.658196 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"8a29b6deeb6009e1fe7a931b2cf89177c0cfa2c70c5aa2feb9a3b9bb5b6df61d"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.658224 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"ec03f418d636771605fae0ee7e9daf8aa0945bcc9619f802f8819cb5a43f7d70"} Feb 16 02:22:28.658373 master-0 kubenswrapper[31559]: I0216 02:22:28.658245 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"5b8d0bef4d74f4dd2410957462f576db299d55ec6675ac364687f5b27fba5fd5"} Feb 16 02:22:28.660294 master-0 kubenswrapper[31559]: I0216 02:22:28.658311 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8deeed5842106bfd4d1b27be4848f25105bbaa159314b19c6a3add851fbf37" Feb 16 02:22:28.663101 master-0 kubenswrapper[31559]: I0216 02:22:28.663054 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:28.663210 master-0 kubenswrapper[31559]: I0216 02:22:28.663109 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Feb 16 02:22:28.663210 master-0 kubenswrapper[31559]: I0216 02:22:28.663119 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:28.731817 master-0 kubenswrapper[31559]: I0216 02:22:28.731731 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.731935 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.731989 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732025 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:22:28.732737 
master-0 kubenswrapper[31559]: I0216 02:22:28.732059 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732090 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732120 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732151 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732184 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732214 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732515 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732560 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732589 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732621 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.732737 master-0 kubenswrapper[31559]: I0216 02:22:28.732653 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.733746 master-0 kubenswrapper[31559]: I0216 02:22:28.732777 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.733746 master-0 kubenswrapper[31559]: I0216 02:22:28.732861 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.733746 master-0 kubenswrapper[31559]: I0216 02:22:28.732897 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:28.733746 master-0 kubenswrapper[31559]: I0216 02:22:28.732928 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:28.733746 master-0 kubenswrapper[31559]: I0216 02:22:28.733003 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.833927 master-0 kubenswrapper[31559]: I0216 02:22:28.833774 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.833927 master-0 kubenswrapper[31559]: I0216 02:22:28.833847 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.833965 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834038 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834059 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834076 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834091 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834112 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834143 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834119 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834215 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834245 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.834262 master-0 kubenswrapper[31559]: I0216 02:22:28.834240 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834283 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834275 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834455 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834613 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834650 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834680 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834594 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834758 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834847 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834874 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834885 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834922 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.834941 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.835003 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.835012 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.835058 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.835094 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.835124 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.835154 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.835137 master-0 kubenswrapper[31559]: I0216 02:22:28.835155 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.836493 master-0 kubenswrapper[31559]: I0216 02:22:28.835190 31559 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 02:22:28.836493 master-0 kubenswrapper[31559]: I0216 02:22:28.835214 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:22:28.836493 master-0 kubenswrapper[31559]: I0216 02:22:28.835230 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:28.836493 master-0 kubenswrapper[31559]: I0216 02:22:28.835266 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:28.836493 master-0 kubenswrapper[31559]: I0216 02:22:28.835268 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:28.836493 master-0 kubenswrapper[31559]: I0216 02:22:28.835219 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 02:22:28.836493 master-0 kubenswrapper[31559]: I0216 02:22:28.835302 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"619e637b8575311b72d43b7b782d610a\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:28.858215 master-0 kubenswrapper[31559]: I0216 02:22:28.858146 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:28.863614 master-0 kubenswrapper[31559]: I0216 02:22:28.863547 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:28.863614 master-0 kubenswrapper[31559]: I0216 02:22:28.863615 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:28.863802 master-0 kubenswrapper[31559]: I0216 02:22:28.863636 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:28.863946 master-0 kubenswrapper[31559]: I0216 02:22:28.863821 31559 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 02:22:28.867931 master-0 kubenswrapper[31559]: E0216 02:22:28.867878 31559 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 16 02:22:29.260087 master-0 kubenswrapper[31559]: I0216 02:22:29.260007 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:29.260511 master-0 kubenswrapper[31559]: I0216 02:22:29.260106 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:29.260511 master-0 kubenswrapper[31559]: I0216 02:22:29.260141 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:29.260511 master-0 kubenswrapper[31559]: I0216 02:22:29.260187 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:29.260511 master-0 kubenswrapper[31559]: I0216 02:22:29.260028 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:29.261760 master-0 kubenswrapper[31559]: I0216 02:22:29.261699 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:29.288589 master-0 kubenswrapper[31559]: I0216 02:22:29.269788 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:29.289997 master-0 kubenswrapper[31559]: I0216 02:22:29.289743 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:29.289997 master-0 kubenswrapper[31559]: I0216 02:22:29.289781 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:29.289997 master-0 kubenswrapper[31559]: I0216 02:22:29.289748 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:29.289997 master-0 kubenswrapper[31559]: I0216 02:22:29.289838 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:29.289997 master-0 kubenswrapper[31559]: I0216 02:22:29.289844 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:29.289997 master-0 kubenswrapper[31559]: I0216 02:22:29.289845 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:29.289997 master-0 kubenswrapper[31559]: I0216 02:22:29.289934 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.289953 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.289806 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.290203 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.289866 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.289869 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.290412 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.290480 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.289852 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.290559 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.289799 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.290729 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.290746 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.289856 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:29.290979 master-0 kubenswrapper[31559]: I0216 02:22:29.290705 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:29.294006 master-0 kubenswrapper[31559]: I0216 02:22:29.292293 31559 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 02:22:29.298144 master-0 kubenswrapper[31559]: E0216 02:22:29.298084 31559 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 16 02:22:29.652623 master-0 kubenswrapper[31559]: I0216 02:22:29.652464 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 02:22:29.659231 master-0 kubenswrapper[31559]: I0216 02:22:29.659175 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 02:22:29.716012 master-0 kubenswrapper[31559]: I0216 02:22:29.715881 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:29.723313 master-0 kubenswrapper[31559]: I0216 02:22:29.723240 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:30.099000 master-0 kubenswrapper[31559]: I0216 02:22:30.098925 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:30.102847 master-0 kubenswrapper[31559]: I0216 02:22:30.102780 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:30.102993 master-0 kubenswrapper[31559]: I0216 02:22:30.102854 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:30.102993 master-0 kubenswrapper[31559]: I0216 02:22:30.102874 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:30.103135 master-0 kubenswrapper[31559]: I0216 02:22:30.103026 31559 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 02:22:30.107615 master-0 kubenswrapper[31559]: E0216 02:22:30.107571 31559 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 16 02:22:30.146092 master-0 kubenswrapper[31559]: I0216 02:22:30.146001 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:30.267841 master-0 kubenswrapper[31559]: I0216 02:22:30.267763 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:30.268102 master-0 kubenswrapper[31559]: I0216 02:22:30.267897 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:30.268102 master-0 kubenswrapper[31559]: I0216 02:22:30.268013 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:30.274132 master-0 kubenswrapper[31559]: I0216 02:22:30.274063 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:30.274354 master-0 kubenswrapper[31559]: I0216 02:22:30.274137 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:30.274354 master-0 kubenswrapper[31559]: I0216 02:22:30.274163 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:30.274795 master-0 kubenswrapper[31559]: I0216 02:22:30.274748 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:30.274962 master-0 kubenswrapper[31559]: I0216 02:22:30.274808 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:30.274962 master-0 kubenswrapper[31559]: I0216 02:22:30.274828 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:30.275523 master-0 kubenswrapper[31559]: I0216 02:22:30.275484 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:30.275523 master-0 kubenswrapper[31559]: I0216 02:22:30.275522 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:30.275717 master-0 kubenswrapper[31559]: I0216 02:22:30.275540 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:30.384462 master-0 kubenswrapper[31559]: I0216 02:22:30.384205 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:30.392508 master-0 kubenswrapper[31559]: I0216 02:22:30.392381 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:30.664151 master-0 kubenswrapper[31559]: I0216 02:22:30.663981 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 16 02:22:30.664406 master-0 kubenswrapper[31559]: I0216 02:22:30.664237 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:30.668142 master-0 kubenswrapper[31559]: I0216 02:22:30.668061 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:30.668271 master-0 kubenswrapper[31559]: I0216 02:22:30.668149 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:30.668271 master-0 kubenswrapper[31559]: I0216 02:22:30.668169 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:30.708648 master-0 kubenswrapper[31559]: I0216 02:22:30.708594 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:31.276543 master-0 kubenswrapper[31559]: I0216 02:22:31.276471 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:31.276543 master-0 kubenswrapper[31559]: I0216 02:22:31.276521 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:31.277503 master-0 kubenswrapper[31559]: I0216 02:22:31.276523 31559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:22:31.277503 master-0 kubenswrapper[31559]: I0216 02:22:31.276761 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:31.283102 master-0 kubenswrapper[31559]: I0216 02:22:31.283036 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:31.283267 master-0 kubenswrapper[31559]: I0216 02:22:31.283104 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:31.283267 master-0 kubenswrapper[31559]: I0216 02:22:31.283133 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:31.283267 master-0 kubenswrapper[31559]: I0216 02:22:31.283181 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:31.283267 master-0 kubenswrapper[31559]: I0216 02:22:31.283221 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:31.283267 master-0 kubenswrapper[31559]: I0216 02:22:31.283243 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:31.283732 master-0 kubenswrapper[31559]: I0216 02:22:31.283317 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:31.283732 master-0 kubenswrapper[31559]: I0216 02:22:31.283358 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:31.283732 master-0 kubenswrapper[31559]: I0216 02:22:31.283375 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:31.708099 master-0 kubenswrapper[31559]: I0216 02:22:31.708011 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:31.718265 master-0 kubenswrapper[31559]: I0216 02:22:31.718183 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:31.718427 master-0 kubenswrapper[31559]: I0216 02:22:31.718280 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:31.718427 master-0 kubenswrapper[31559]: I0216 02:22:31.718312 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:31.718609 master-0 kubenswrapper[31559]: I0216 02:22:31.718584 31559 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 02:22:31.721937 master-0 kubenswrapper[31559]: E0216 02:22:31.721840 31559 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 16 02:22:31.795220 master-0 kubenswrapper[31559]: I0216 02:22:31.795089 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:31.831580 master-0 kubenswrapper[31559]: I0216 02:22:31.831505 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:32.287242 master-0 kubenswrapper[31559]: I0216 02:22:32.287179 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-check-endpoints/0.log"
Feb 16 02:22:32.289892 master-0 kubenswrapper[31559]: I0216 02:22:32.289806 31559 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="c1ba2d68a64d6fb932ae524cee345f61dbf00431978608d5398de81a322f1f49" exitCode=255
Feb 16 02:22:32.290089 master-0 kubenswrapper[31559]: I0216 02:22:32.289910 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerDied","Data":"c1ba2d68a64d6fb932ae524cee345f61dbf00431978608d5398de81a322f1f49"}
Feb 16 02:22:32.290089 master-0 kubenswrapper[31559]: I0216 02:22:32.289974 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:32.290089 master-0 kubenswrapper[31559]: I0216 02:22:32.290009 31559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:22:32.290089 master-0 kubenswrapper[31559]: I0216 02:22:32.290029 31559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 02:22:32.290089 master-0 kubenswrapper[31559]: I0216 02:22:32.290077 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 02:22:32.294510 master-0 kubenswrapper[31559]: I0216 02:22:32.294398 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:32.294510 master-0 kubenswrapper[31559]: I0216 02:22:32.294481 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:32.294510 master-0 kubenswrapper[31559]: I0216 02:22:32.294495 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:32.294968 master-0 kubenswrapper[31559]: I0216 02:22:32.294893 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 02:22:32.294968 master-0 kubenswrapper[31559]: I0216 02:22:32.294955 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 02:22:32.295254 master-0 kubenswrapper[31559]: I0216 02:22:32.294980 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 02:22:32.295325 master-0 kubenswrapper[31559]: I0216 02:22:32.295263 31559 scope.go:117] "RemoveContainer" containerID="c1ba2d68a64d6fb932ae524cee345f61dbf00431978608d5398de81a322f1f49"
Feb 16 02:22:32.297606 master-0 kubenswrapper[31559]: I0216 02:22:32.297561 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:32.862415 master-0 kubenswrapper[31559]: I0216 02:22:32.862326 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:32.865526 master-0 kubenswrapper[31559]: I0216 02:22:32.865498 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:32.869899 master-0 kubenswrapper[31559]: I0216 02:22:32.869870 31559 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 02:22:32.870918 master-0 kubenswrapper[31559]: I0216 02:22:32.870885 31559 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 02:22:32.901098 master-0 kubenswrapper[31559]: I0216 02:22:32.901050 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:32.911707 master-0 kubenswrapper[31559]: I0216 02:22:32.911673 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:33.299664 master-0 kubenswrapper[31559]: I0216 02:22:33.299627 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-check-endpoints/0.log"
Feb 16 02:22:33.301339 master-0 kubenswrapper[31559]: I0216 02:22:33.301293 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"619e637b8575311b72d43b7b782d610a","Type":"ContainerStarted","Data":"4f2b3293fb881e9e69c8104ce58e27052e815106ebd3a16fe20d387a1610c59c"}
Feb 16 02:22:33.301578 master-0 kubenswrapper[31559]: I0216 02:22:33.301555 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:22:33.320427 master-0 kubenswrapper[31559]: E0216 02:22:33.320396 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:22:33.870731 master-0 kubenswrapper[31559]: I0216 02:22:33.870662 31559 apiserver.go:52] "Watching apiserver"
Feb 16 02:22:33.898955 master-0 kubenswrapper[31559]: I0216 02:22:33.898881 31559 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 02:22:33.902216 master-0 kubenswrapper[31559]: I0216 02:22:33.902034 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-4rfwq","openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz","openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq","openshift-marketplace/community-operators-s95k9","openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl","openshift-kube-apiserver/installer-3-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs","openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn","openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9","openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj","openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k","openshift-network-operator/network-operator-6fcf4c966-dctqr","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng","openshift-kube-apiserver/installer-1-retry-1-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-marketplace/certified-operators-gkbtj","openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b","openshift-monitoring/node-exporter-jxbq6","openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg","openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5","openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g","openshift-machine-config-operator/machine-config-server-5zv6j","openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s","openshift-multus/multus-admission-controller-6d678b8d67-8gzlx","openshift-cluster-node-tuning-operator/tuned-vvw25","openshift-etcd/installer-2-master-0","openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2","openshift-kube-controller-manager/installer-2-master-0","openshift-multus/multus-additional-cni-plugins-mvdkf","openshift-network-diagnostics/network-check-target-hswdj","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g","openshift-controller-manager/controller-manager-5788fc6459-29m25","openshift-dns/node-resolver-7tjn9","openshift-marketplace/redhat-marketplace-thm6w","openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl","openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz","openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2","openshift-monitoring/metrics-server-67b79bd656-cs2n2","openshift-multus/network-metrics-daemon-gn9mv","openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6","assisted-installer/assisted-installer-controller-p2zdr","openshift-apiserver/apiserver-578b9bc556-8g98v","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler/installer-5-master-0","openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-node-identity/network-node-identity-kffmg","openshift-ovn-kubernetes/ovnkube-node-bs85n","openshift-authentication-operator/authentication-operator-755d954778-bngv9","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4","openshift-ingress-canary/ingress-canary-6t7mx","openshift-insights/insights-operator-cb4f7b4cf-llpf5","openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r","openshift-multus/multus-8jgrl","openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq","openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9","openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd","openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj","openshift-etcd/etcd-master-0","openshift-etcd/installer-1-master-0","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd","openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj","openshift-marketplace/redhat-operators-9c6g5","openshift-monitoring/prometheus-operator-7485d645b8-v9mmd","openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br","openshift-ingress/router-default-864ddd5f56-ffptx","openshift-machine-config-operator/machine-config-daemon-qd4l7","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp","openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs","openshift-network-operator/iptables-alerter-9bnql","openshift-dns/dns-default-njlg6","openshift-kube-apiserver/installer-1-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4","openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f","openshift-service-ca/service-ca-676cd8b9b5-x6nhn"]
Feb 16 02:22:33.902642 master-0 kubenswrapper[31559]: I0216 02:22:33.902575 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-p2zdr"
Feb 16 02:22:33.911500 master-0 kubenswrapper[31559]: I0216 02:22:33.909622 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 02:22:33.911500 master-0 kubenswrapper[31559]: I0216 02:22:33.910244 31559 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="05ee7f0d-b929-461e-8464-8e1aa0635e08"
Feb 16 02:22:33.911500 master-0 kubenswrapper[31559]: I0216 02:22:33.910270 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 02:22:33.923075 master-0 kubenswrapper[31559]: I0216 02:22:33.922964 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.924463 master-0 kubenswrapper[31559]: I0216 02:22:33.924302 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.924463 master-0 kubenswrapper[31559]: I0216 02:22:33.924423 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 02:22:33.927352 master-0 kubenswrapper[31559]: I0216 02:22:33.927297 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 02:22:33.928291 master-0 kubenswrapper[31559]: I0216 02:22:33.928241 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 02:22:33.928504 master-0 kubenswrapper[31559]: I0216 02:22:33.928314 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 16 02:22:33.929586 master-0 kubenswrapper[31559]: I0216 02:22:33.929541 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 02:22:33.930084 master-0 kubenswrapper[31559]: I0216 02:22:33.930014 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 02:22:33.930488 master-0 kubenswrapper[31559]: I0216 02:22:33.930397 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 16 02:22:33.930488 master-0 kubenswrapper[31559]: I0216 02:22:33.930411 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 16 02:22:33.930937 master-0 kubenswrapper[31559]: I0216 02:22:33.930913 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.931252 master-0 kubenswrapper[31559]: I0216 02:22:33.931193 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 02:22:33.931522 master-0 kubenswrapper[31559]: I0216 02:22:33.931485 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 02:22:33.931652 master-0 kubenswrapper[31559]: I0216 02:22:33.931523 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 02:22:33.931856 master-0 kubenswrapper[31559]: I0216 02:22:33.931804 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 02:22:33.932703 master-0 kubenswrapper[31559]: I0216 02:22:33.932263 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 02:22:33.932703 master-0 kubenswrapper[31559]: I0216 02:22:33.932527 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 16 02:22:33.932947 master-0 kubenswrapper[31559]: I0216 02:22:33.932752 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 02:22:33.933302 master-0 kubenswrapper[31559]: I0216 02:22:33.933263 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.933493 master-0 kubenswrapper[31559]: I0216 02:22:33.933343 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.933493 master-0 kubenswrapper[31559]: I0216 02:22:33.933390 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 02:22:33.933699 master-0 kubenswrapper[31559]: I0216 02:22:33.933538 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 02:22:33.936043 master-0 kubenswrapper[31559]: I0216 02:22:33.932916 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.937416 master-0 kubenswrapper[31559]: I0216 02:22:33.937239 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 02:22:33.937591 master-0 kubenswrapper[31559]: I0216 02:22:33.937487 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 02:22:33.937591 master-0 kubenswrapper[31559]: I0216 02:22:33.937576 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 02:22:33.937726 master-0 kubenswrapper[31559]: I0216 02:22:33.937657 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.937726 master-0 kubenswrapper[31559]: I0216 02:22:33.937720 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 02:22:33.937990 master-0 kubenswrapper[31559]: I0216 02:22:33.937946 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.938096 master-0 kubenswrapper[31559]: I0216 02:22:33.938079 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 02:22:33.938228 master-0 kubenswrapper[31559]: I0216 02:22:33.938201 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 02:22:33.938458 master-0 kubenswrapper[31559]: I0216 02:22:33.938386 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 02:22:33.938570 master-0 kubenswrapper[31559]: I0216 02:22:33.938505 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 16 02:22:33.938681 master-0 kubenswrapper[31559]: I0216 02:22:33.938607 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 02:22:33.938770 master-0 kubenswrapper[31559]: I0216 02:22:33.938722 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 02:22:33.938867 master-0 kubenswrapper[31559]: I0216 02:22:33.938794 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 02:22:33.938867 master-0 kubenswrapper[31559]: I0216 02:22:33.938829 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 02:22:33.939038 master-0 kubenswrapper[31559]: I0216 02:22:33.938906 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 02:22:33.939038 master-0 kubenswrapper[31559]: I0216 02:22:33.938949 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 02:22:33.939038 master-0 kubenswrapper[31559]: I0216 02:22:33.938980 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.939038 master-0 kubenswrapper[31559]: I0216 02:22:33.939021 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.938917 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939096 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939123 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939064 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 02:22:33.939373
master-0 kubenswrapper[31559]: I0216 02:22:33.939201 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939220 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939149 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939281 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939307 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939339 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939201 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 02:22:33.939373 master-0 kubenswrapper[31559]: I0216 02:22:33.939377 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939416 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939430 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.938812 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939560 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939603 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939623 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939291 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939799 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939819 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939131 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939924 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939799 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939984 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939221 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939660 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.939723 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.940224 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 02:22:33.940586 master-0 kubenswrapper[31559]: I0216 02:22:33.940487 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 02:22:33.943996 master-0 kubenswrapper[31559]: I0216 02:22:33.940639 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:22:33.943996 master-0 kubenswrapper[31559]: I0216 02:22:33.941030 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 16 02:22:33.943996 master-0 kubenswrapper[31559]: I0216 02:22:33.941605 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 02:22:33.943996 master-0 kubenswrapper[31559]: I0216 02:22:33.941721 31559 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 02:22:33.943996 master-0 kubenswrapper[31559]: I0216 02:22:33.941809 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 02:22:33.943996 master-0 kubenswrapper[31559]: I0216 02:22:33.942652 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 02:22:33.943996 master-0 kubenswrapper[31559]: I0216 02:22:33.942785 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 02:22:33.943996 master-0 kubenswrapper[31559]: I0216 02:22:33.942991 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 02:22:33.954005 master-0 kubenswrapper[31559]: I0216 02:22:33.953939 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 02:22:33.956176 master-0 kubenswrapper[31559]: I0216 02:22:33.956005 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 02:22:33.956345 master-0 kubenswrapper[31559]: I0216 02:22:33.956314 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 02:22:33.956598 master-0 kubenswrapper[31559]: I0216 02:22:33.956554 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 02:22:33.957355 master-0 kubenswrapper[31559]: I0216 02:22:33.957314 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 02:22:33.957613 master-0 kubenswrapper[31559]: I0216 02:22:33.957563 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 02:22:33.957711 master-0 kubenswrapper[31559]: I0216 02:22:33.957676 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 02:22:33.957711 master-0 kubenswrapper[31559]: I0216 02:22:33.957707 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 02:22:33.957800 master-0 kubenswrapper[31559]: I0216 02:22:33.957724 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 02:22:33.957800 master-0 kubenswrapper[31559]: I0216 02:22:33.957785 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 02:22:33.957971 master-0 kubenswrapper[31559]: I0216 02:22:33.957853 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 02:22:33.957971 master-0 kubenswrapper[31559]: 
I0216 02:22:33.957954 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 02:22:33.958052 master-0 kubenswrapper[31559]: I0216 02:22:33.957958 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 02:22:33.959491 master-0 kubenswrapper[31559]: I0216 02:22:33.958021 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 02:22:33.959705 master-0 kubenswrapper[31559]: I0216 02:22:33.959664 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 02:22:33.959951 master-0 kubenswrapper[31559]: I0216 02:22:33.959902 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 02:22:33.960004 master-0 kubenswrapper[31559]: I0216 02:22:33.959945 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 02:22:33.961937 master-0 kubenswrapper[31559]: I0216 02:22:33.961901 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 02:22:33.962320 master-0 kubenswrapper[31559]: I0216 02:22:33.962246 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 02:22:33.964141 master-0 kubenswrapper[31559]: I0216 02:22:33.963014 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 02:22:33.964141 master-0 kubenswrapper[31559]: I0216 02:22:33.963724 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 16 02:22:33.964337 master-0 kubenswrapper[31559]: 
I0216 02:22:33.964162 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 02:22:33.975344 master-0 kubenswrapper[31559]: I0216 02:22:33.973015 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 02:22:33.975344 master-0 kubenswrapper[31559]: I0216 02:22:33.973735 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 02:22:33.975344 master-0 kubenswrapper[31559]: I0216 02:22:33.974662 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 02:22:33.975344 master-0 kubenswrapper[31559]: I0216 02:22:33.974834 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 02:22:33.975344 master-0 kubenswrapper[31559]: I0216 02:22:33.974984 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 02:22:33.975344 master-0 kubenswrapper[31559]: I0216 02:22:33.975004 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 02:22:33.975344 master-0 kubenswrapper[31559]: I0216 02:22:33.974671 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59" Feb 16 02:22:33.978542 master-0 kubenswrapper[31559]: I0216 02:22:33.978515 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:33.987717 master-0 kubenswrapper[31559]: I0216 02:22:33.987658 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 02:22:33.994662 master-0 kubenswrapper[31559]: I0216 02:22:33.994616 31559 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 02:22:34.005989 master-0 kubenswrapper[31559]: I0216 02:22:34.005947 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 02:22:34.008265 master-0 kubenswrapper[31559]: I0216 02:22:34.008227 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 02:22:34.025789 master-0 kubenswrapper[31559]: I0216 02:22:34.025737 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 02:22:34.049490 master-0 kubenswrapper[31559]: I0216 02:22:34.048960 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 02:22:34.067696 master-0 kubenswrapper[31559]: I0216 02:22:34.066913 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 02:22:34.084838 master-0 kubenswrapper[31559]: I0216 02:22:34.084757 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-encryption-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.084838 master-0 kubenswrapper[31559]: I0216 02:22:34.084840 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.085147 master-0 kubenswrapper[31559]: I0216 02:22:34.085018 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:34.085147 master-0 kubenswrapper[31559]: I0216 02:22:34.085062 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:22:34.085254 master-0 kubenswrapper[31559]: I0216 02:22:34.085137 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr872\" (UniqueName: \"kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:22:34.085254 master-0 kubenswrapper[31559]: I0216 02:22:34.085177 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-modprobe-d\") pod \"tuned-vvw25\" (UID: 
\"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.085471 master-0 kubenswrapper[31559]: I0216 02:22:34.085374 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc9jt\" (UniqueName: \"kubernetes.io/projected/75915935-00a2-44ce-99d1-03e2492044d4-kube-api-access-pc9jt\") pod \"network-check-source-7d8f4c8c66-kcnkd\" (UID: \"75915935-00a2-44ce-99d1-03e2492044d4\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" Feb 16 02:22:34.085539 master-0 kubenswrapper[31559]: I0216 02:22:34.085480 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf8t\" (UniqueName: \"kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:22:34.085604 master-0 kubenswrapper[31559]: I0216 02:22:34.085525 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:22:34.085604 master-0 kubenswrapper[31559]: I0216 02:22:34.085568 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:22:34.085705 master-0 kubenswrapper[31559]: I0216 02:22:34.085609 31559 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-root\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:34.085705 master-0 kubenswrapper[31559]: I0216 02:22:34.085644 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:22:34.085705 master-0 kubenswrapper[31559]: I0216 02:22:34.085675 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:34.085847 master-0 kubenswrapper[31559]: I0216 02:22:34.085781 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 02:22:34.085946 master-0 kubenswrapper[31559]: I0216 02:22:34.085842 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22739961-e322-47f1-b232-eaa4cc35319c-audit-dir\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.086189 master-0 kubenswrapper[31559]: I0216 02:22:34.086136 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1743372f-bdb0-4558-b47b-3714f3aa3fde-config\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:22:34.086189 master-0 kubenswrapper[31559]: I0216 02:22:34.086177 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91938be6-9ae4-4849-abe8-fc842daecd23-config\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:22:34.086314 master-0 kubenswrapper[31559]: I0216 02:22:34.086281 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.086388 master-0 kubenswrapper[31559]: I0216 02:22:34.086334 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-image-import-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.086388 master-0 kubenswrapper[31559]: I0216 02:22:34.086360 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" Feb 16 02:22:34.086579 master-0 kubenswrapper[31559]: 
I0216 02:22:34.086386 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7llx6\" (UniqueName: \"kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:22:34.086579 master-0 kubenswrapper[31559]: I0216 02:22:34.086417 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:22:34.086579 master-0 kubenswrapper[31559]: I0216 02:22:34.086464 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" Feb 16 02:22:34.086579 master-0 kubenswrapper[31559]: I0216 02:22:34.086490 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmjjn\" (UniqueName: \"kubernetes.io/projected/d870332c-2498-4135-a9b3-a71e67c2805b-kube-api-access-wmjjn\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:34.086579 master-0 kubenswrapper[31559]: I0216 02:22:34.086517 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.086579 master-0 kubenswrapper[31559]: I0216 02:22:34.086552 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:34.086852 master-0 kubenswrapper[31559]: I0216 02:22:34.086607 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-stats-auth\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:34.086852 master-0 kubenswrapper[31559]: I0216 02:22:34.086654 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dcef814-353e-4985-9afc-9e545f7853ae-ovn-node-metrics-cert\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.086852 master-0 kubenswrapper[31559]: I0216 02:22:34.086661 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:22:34.086852 master-0 
kubenswrapper[31559]: I0216 02:22:34.086695 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-trusted-ca-bundle\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v"
Feb 16 02:22:34.087023 master-0 kubenswrapper[31559]: I0216 02:22:34.086977 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e379cfaf-3a4c-40e7-8641-3524b3669295-config\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:22:34.087159 master-0 kubenswrapper[31559]: I0216 02:22:34.087104 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"
Feb 16 02:22:34.087226 master-0 kubenswrapper[31559]: I0216 02:22:34.087175 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9sgx\" (UniqueName: \"kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq"
Feb 16 02:22:34.087269 master-0 kubenswrapper[31559]: I0216 02:22:34.087219 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:22:34.087269 master-0 kubenswrapper[31559]: I0216 02:22:34.087257 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp"
Feb 16 02:22:34.087364 master-0 kubenswrapper[31559]: I0216 02:22:34.087321 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:22:34.087422 master-0 kubenswrapper[31559]: I0216 02:22:34.087358 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg"
Feb 16 02:22:34.087422 master-0 kubenswrapper[31559]: I0216 02:22:34.087396 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fbc8f91-f8cc-48d8-917c-64fa978069de-rootfs\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:22:34.087615 master-0 kubenswrapper[31559]: I0216 02:22:34.087459 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:22:34.087615 master-0 kubenswrapper[31559]: I0216 02:22:34.087524 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:22:34.087615 master-0 kubenswrapper[31559]: I0216 02:22:34.087577 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94gg\" (UniqueName: \"kubernetes.io/projected/9defdfff-eb18-4beb-9591-918d0e4b4236-kube-api-access-r94gg\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn"
Feb 16 02:22:34.087742 master-0 kubenswrapper[31559]: I0216 02:22:34.087617 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5"
Feb 16 02:22:34.087742 master-0 kubenswrapper[31559]: I0216 02:22:34.087659 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl"
Feb 16 02:22:34.087742 master-0 kubenswrapper[31559]: I0216 02:22:34.087660 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9be9fd24-fdb1-43dc-80b8-68020427bfd7-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:22:34.087742 master-0 kubenswrapper[31559]: I0216 02:22:34.087696 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mp4\" (UniqueName: \"kubernetes.io/projected/32d420d6-bbda-42c0-82fe-8b187ad91607-kube-api-access-w4mp4\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:22:34.087919 master-0 kubenswrapper[31559]: I0216 02:22:34.087742 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p8rc\" (UniqueName: \"kubernetes.io/projected/c4a146b2-c712-408a-97d8-5de3a84f3aaf-kube-api-access-6p8rc\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj"
Feb 16 02:22:34.087919 master-0 kubenswrapper[31559]: I0216 02:22:34.087900 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fmj\" (UniqueName: \"kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:22:34.088009 master-0 kubenswrapper[31559]: I0216 02:22:34.087948 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhz2m\" (UniqueName: \"kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr"
Feb 16 02:22:34.088009 master-0 kubenswrapper[31559]: I0216 02:22:34.087988 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6v2\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-kube-api-access-lj6v2\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:22:34.088090 master-0 kubenswrapper[31559]: I0216 02:22:34.088027 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj"
Feb 16 02:22:34.088090 master-0 kubenswrapper[31559]: I0216 02:22:34.088070 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-textfile\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.088175 master-0 kubenswrapper[31559]: I0216 02:22:34.088115 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj"
Feb 16 02:22:34.088175 master-0 kubenswrapper[31559]: I0216 02:22:34.088154 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-utilities\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj"
Feb 16 02:22:34.088267 master-0 kubenswrapper[31559]: I0216 02:22:34.088198 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzpb\" (UniqueName: \"kubernetes.io/projected/89041b37-18f6-499d-89ec-a0523a25dc58-kube-api-access-zvzpb\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5"
Feb 16 02:22:34.088267 master-0 kubenswrapper[31559]: I0216 02:22:34.088236 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-textfile\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.088267 master-0 kubenswrapper[31559]: I0216 02:22:34.088237 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:22:34.088460 master-0 kubenswrapper[31559]: I0216 02:22:34.088290 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.088460 master-0 kubenswrapper[31559]: I0216 02:22:34.088303 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-utilities\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj"
Feb 16 02:22:34.088460 master-0 kubenswrapper[31559]: I0216 02:22:34.088339 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"
Feb 16 02:22:34.088460 master-0 kubenswrapper[31559]: I0216 02:22:34.088368 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad700b17-ba2a-41d4-8bec-538a009a613b-serving-cert\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:22:34.088460 master-0 kubenswrapper[31559]: I0216 02:22:34.088396 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"
Feb 16 02:22:34.088460 master-0 kubenswrapper[31559]: I0216 02:22:34.088397 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9be9fd24-fdb1-43dc-80b8-68020427bfd7-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:22:34.088724 master-0 kubenswrapper[31559]: I0216 02:22:34.088466 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac81030-35d1-4d86-844d-65d1156d8944-config-volume\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6"
Feb 16 02:22:34.088724 master-0 kubenswrapper[31559]: I0216 02:22:34.088549 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:22:34.088724 master-0 kubenswrapper[31559]: I0216 02:22:34.088580 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"
Feb 16 02:22:34.088724 master-0 kubenswrapper[31559]: I0216 02:22:34.088609 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sctj8\" (UniqueName: \"kubernetes.io/projected/0a900f93-91c9-4782-89a3-1cc09f3aec95-kube-api-access-sctj8\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.088724 master-0 kubenswrapper[31559]: I0216 02:22:34.088634 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-conf\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.088926 master-0 kubenswrapper[31559]: I0216 02:22:34.088742 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:22:34.088926 master-0 kubenswrapper[31559]: I0216 02:22:34.088776 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl"
Feb 16 02:22:34.088926 master-0 kubenswrapper[31559]: I0216 02:22:34.088884 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd"
Feb 16 02:22:34.088926 master-0 kubenswrapper[31559]: I0216 02:22:34.088916 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm44l\" (UniqueName: \"kubernetes.io/projected/dc3354cb-b6c3-40a5-a695-cccb079ad292-kube-api-access-hm44l\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6"
Feb 16 02:22:34.089093 master-0 kubenswrapper[31559]: I0216 02:22:34.088942 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcq6v\" (UniqueName: \"kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:22:34.089093 master-0 kubenswrapper[31559]: I0216 02:22:34.088958 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f33151-61df-4b66-ba85-9ba210779059-config\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp"
Feb 16 02:22:34.089093 master-0 kubenswrapper[31559]: I0216 02:22:34.088971 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-catalog-content\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj"
Feb 16 02:22:34.089224 master-0 kubenswrapper[31559]: I0216 02:22:34.089092 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.089224 master-0 kubenswrapper[31559]: I0216 02:22:34.089124 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-etcd-client\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz"
Feb 16 02:22:34.089224 master-0 kubenswrapper[31559]: I0216 02:22:34.089133 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ad700b17-ba2a-41d4-8bec-538a009a613b-service-ca\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:22:34.089224 master-0 kubenswrapper[31559]: I0216 02:22:34.089211 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6"
Feb 16 02:22:34.089386 master-0 kubenswrapper[31559]: I0216 02:22:34.089256 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf7tw\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-kube-api-access-kf7tw\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:22:34.089386 master-0 kubenswrapper[31559]: I0216 02:22:34.089270 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b923d74-bad3-4780-8e7e-e8365ac9ea06-catalog-content\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj"
Feb 16 02:22:34.089386 master-0 kubenswrapper[31559]: I0216 02:22:34.089300 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-client\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v"
Feb 16 02:22:34.089386 master-0 kubenswrapper[31559]: I0216 02:22:34.089337 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:22:34.089386 master-0 kubenswrapper[31559]: I0216 02:22:34.089373 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cnfs\" (UniqueName: \"kubernetes.io/projected/7846b339-c46d-4983-b586-a28f2868f665-kube-api-access-5cnfs\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx"
Feb 16 02:22:34.089619 master-0 kubenswrapper[31559]: I0216 02:22:34.089412 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:22:34.089619 master-0 kubenswrapper[31559]: I0216 02:22:34.089489 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4djm\" (UniqueName: \"kubernetes.io/projected/27a42eb0-677c-414d-b0ec-f945ec39b7e9-kube-api-access-l4djm\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"
Feb 16 02:22:34.089619 master-0 kubenswrapper[31559]: I0216 02:22:34.089534 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp"
Feb 16 02:22:34.089737 master-0 kubenswrapper[31559]: I0216 02:22:34.089623 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:22:34.089737 master-0 kubenswrapper[31559]: I0216 02:22:34.089652 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r"
Feb 16 02:22:34.089737 master-0 kubenswrapper[31559]: I0216 02:22:34.089672 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.089737 master-0 kubenswrapper[31559]: I0216 02:22:34.089692 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.089895 master-0 kubenswrapper[31559]: I0216 02:22:34.089774 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0540a70-a256-422b-a827-e564d0e67866-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:22:34.089895 master-0 kubenswrapper[31559]: I0216 02:22:34.089774 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88kmw\" (UniqueName: \"kubernetes.io/projected/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-kube-api-access-88kmw\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq"
Feb 16 02:22:34.089895 master-0 kubenswrapper[31559]: I0216 02:22:34.089822 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:22:34.089895 master-0 kubenswrapper[31559]: I0216 02:22:34.089847 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-encryption-config\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:22:34.089895 master-0 kubenswrapper[31559]: I0216 02:22:34.089869 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7317f91-9441-449f-9738-85da088cf94f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:22:34.089895 master-0 kubenswrapper[31559]: I0216 02:22:34.089874 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vchs\" (UniqueName: \"kubernetes.io/projected/c442d349-668b-4d01-a097-5981b7a04eac-kube-api-access-4vchs\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl"
Feb 16 02:22:34.090136 master-0 kubenswrapper[31559]: I0216 02:22:34.089925 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:22:34.090136 master-0 kubenswrapper[31559]: I0216 02:22:34.089953 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.090136 master-0 kubenswrapper[31559]: I0216 02:22:34.089979 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:22:34.090136 master-0 kubenswrapper[31559]: I0216 02:22:34.090003 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-systemd\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.090136 master-0 kubenswrapper[31559]: I0216 02:22:34.090031 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-serving-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v"
Feb 16 02:22:34.090136 master-0 kubenswrapper[31559]: I0216 02:22:34.090054 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.090136 master-0 kubenswrapper[31559]: I0216 02:22:34.090100 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:22:34.090136 master-0 kubenswrapper[31559]: I0216 02:22:34.090102 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-config\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:22:34.090521 master-0 kubenswrapper[31559]: I0216 02:22:34.090202 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9snq8\" (UniqueName: \"kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:22:34.090521 master-0 kubenswrapper[31559]: I0216 02:22:34.090220 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04804a08-e3a5-46f3-abcb-967866834baa-trusted-ca\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:22:34.090521 master-0 kubenswrapper[31559]: I0216 02:22:34.090336 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:22:34.090521 master-0 kubenswrapper[31559]: I0216 02:22:34.090372 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxlnm\" (UniqueName: \"kubernetes.io/projected/0abea413-e08a-465a-8ec4-2be650bfd5bd-kube-api-access-bxlnm\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5"
Feb 16 02:22:34.090521 master-0 kubenswrapper[31559]: I0216 02:22:34.090392 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit-dir\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v"
Feb 16 02:22:34.090521 master-0 kubenswrapper[31559]: I0216 02:22:34.090484 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"
Feb 16 02:22:34.090761 master-0 kubenswrapper[31559]: I0216 02:22:34.090554 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.090761 master-0 kubenswrapper[31559]: I0216 02:22:34.090607 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.090761 master-0 kubenswrapper[31559]: I0216 02:22:34.090611 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-service-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:22:34.090761 master-0 kubenswrapper[31559]: I0216 02:22:34.090648 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.090947 master-0 kubenswrapper[31559]: I0216 02:22:34.090780 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl"
Feb 16 02:22:34.090947 master-0 kubenswrapper[31559]: I0216 02:22:34.090795 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/21686a6d-f685-4fb6-98af-3e8a39c5981b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs"
Feb 16 02:22:34.090947 master-0 kubenswrapper[31559]: I0216 02:22:34.090814 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjsbs\" (UniqueName: \"kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.090947 master-0 kubenswrapper[31559]: I0216 02:22:34.090891 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j"
Feb 16 02:22:34.090947 master-0 kubenswrapper[31559]: I0216 02:22:34.090904 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-config\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.090947 master-0 kubenswrapper[31559]: I0216 02:22:34.090925 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-sys\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.091200 master-0 kubenswrapper[31559]: I0216 02:22:34.090952 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-var-lib-kubelet\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.091200 master-0 kubenswrapper[31559]: I0216 02:22:34.090975 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:22:34.091200 master-0 kubenswrapper[31559]: I0216 02:22:34.091015 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad700b17-ba2a-41d4-8bec-538a009a613b-kube-api-access\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:22:34.091200 master-0 kubenswrapper[31559]: I0216 02:22:34.091112 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:22:34.091387 master-0 kubenswrapper[31559]: I0216 02:22:34.091200 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-85sdg\" (UniqueName: \"kubernetes.io/projected/17390d9a-148d-4927-a831-5bc4873c43d5-kube-api-access-85sdg\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:34.091387 master-0 kubenswrapper[31559]: I0216 02:22:34.091344 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.091508 master-0 kubenswrapper[31559]: I0216 02:22:34.091390 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:34.091508 master-0 kubenswrapper[31559]: I0216 02:22:34.091423 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bde83629-b39c-401e-bc30-5ce205638918-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:22:34.091508 master-0 kubenswrapper[31559]: I0216 02:22:34.091449 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.091508 master-0 kubenswrapper[31559]: I0216 02:22:34.091506 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7r8\" (UniqueName: \"kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:22:34.091753 master-0 kubenswrapper[31559]: I0216 02:22:34.091532 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:34.091753 master-0 kubenswrapper[31559]: I0216 02:22:34.091560 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-tuned\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.091753 master-0 kubenswrapper[31559]: I0216 02:22:34.091583 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b62004d-7fe3-47ae-8e26-8496befb047c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:22:34.091753 master-0 kubenswrapper[31559]: I0216 02:22:34.091608 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" 
(UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:34.091753 master-0 kubenswrapper[31559]: I0216 02:22:34.091658 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a07cd28-a33d-4abd-9198-ba82bacd51ba-hosts-file\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9" Feb 16 02:22:34.091753 master-0 kubenswrapper[31559]: I0216 02:22:34.091682 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.091753 master-0 kubenswrapper[31559]: I0216 02:22:34.091701 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8fj\" (UniqueName: \"kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:22:34.091753 master-0 kubenswrapper[31559]: I0216 02:22:34.091730 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-tuned\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091740 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-key\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091850 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc3354cb-b6c3-40a5-a695-cccb079ad292-tmpfs\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091880 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ns9l\" (UniqueName: \"kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091908 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-etcd-serving-ca\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091934 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091961 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091970 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc3354cb-b6c3-40a5-a695-cccb079ad292-tmpfs\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091977 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-key\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.091991 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092037 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " 
pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092063 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092088 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8lvq\" (UniqueName: \"kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092113 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092139 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092167 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-catalog-content\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092192 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58cq8\" (UniqueName: \"kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092229 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-serving-cert\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092256 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092281 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092299 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-catalog-content\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092327 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/23755f7f-dce6-4dcf-9664-22e3aedb5c81-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092304 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvmr\" (UniqueName: \"kubernetes.io/projected/d8bbd369-4219-48ef-ae2d-b45c81789403-kube-api-access-grvmr\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092395 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/37fd7550-cc81-4180-8540-0bc5f62f63d2-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-9dx2k\" (UID: \"37fd7550-cc81-4180-8540-0bc5f62f63d2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092501 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod 
\"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092543 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092570 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bnwz\" (UniqueName: \"kubernetes.io/projected/0fbc8f91-f8cc-48d8-917c-64fa978069de-kube-api-access-5bnwz\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092603 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdllq\" (UniqueName: \"kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092639 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092676 31559 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092715 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlnkb\" (UniqueName: \"kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092737 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ffa4db8-97da-42de-8e51-35680f518ca7-metrics-tls\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092758 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092789 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxrl\" (UniqueName: \"kubernetes.io/projected/a77e2f8f-d164-4a58-aab2-f3444c05cacb-kube-api-access-bsxrl\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092821 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49hf\" (UniqueName: \"kubernetes.io/projected/5f810ea0-e32d-4097-beca-5194349a57a6-kube-api-access-p49hf\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092852 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8x2\" (UniqueName: \"kubernetes.io/projected/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-kube-api-access-xj8x2\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:22:34.092835 master-0 kubenswrapper[31559]: I0216 02:22:34.092877 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.092904 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.092933 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.092938 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.092958 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-audit-policies\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093002 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093001 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1f2d2601-481d-4e86-ac4c-3d34d5691261-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:22:34.094318 master-0 
kubenswrapper[31559]: I0216 02:22:34.093093 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-cache\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093119 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093127 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-serving-cert\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093143 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093184 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/21686a6d-f685-4fb6-98af-3e8a39c5981b-telemetry-config\") pod 
\"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093185 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2a67f799-fd8d-4bee-9d67-720151c1650b-iptables-alerter-script\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093240 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-cache\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093274 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-catalog-content\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093299 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093326 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqrb\" (UniqueName: \"kubernetes.io/projected/97b8261a-91e3-435e-93f8-0a17f30359fd-kube-api-access-pmqrb\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093351 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093377 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093392 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-catalog-content\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093498 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093545 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24b6h\" (UniqueName: \"kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093575 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9p9r\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093598 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ac81030-35d1-4d86-844d-65d1156d8944-metrics-tls\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093660 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gmn\" (UniqueName: \"kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093863 31559 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f33151-61df-4b66-ba85-9ba210779059-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093824 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-sys\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093941 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysconfig\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093970 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.093988 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cstlg\" (UniqueName: \"kubernetes.io/projected/5b923d74-bad3-4780-8e7e-e8365ac9ea06-kube-api-access-cstlg\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 
02:22:34.094035 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgj82\" (UniqueName: \"kubernetes.io/projected/48863ff6-63ac-42d7-bac7-29d888c92db9-kube-api-access-kgj82\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094074 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094103 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094140 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-tmp\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094202 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/86af980a-2653-40c3-a368-a795d7fb8558-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094248 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/00ef3b03-55dc-4661-b7fd-1e586c45b5de-tmp\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094258 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094218 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-cni-binary-copy\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094290 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d2601-481d-4e86-ac4c-3d34d5691261-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094307 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094318 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/86af980a-2653-40c3-a368-a795d7fb8558-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:34.094318 master-0 kubenswrapper[31559]: I0216 02:22:34.094348 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-etcd-client\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.094403 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8g4\" (UniqueName: \"kubernetes.io/projected/5b62004d-7fe3-47ae-8e26-8496befb047c-kube-api-access-ln8g4\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.094479 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kws4h\" (UniqueName: \"kubernetes.io/projected/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-kube-api-access-kws4h\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " 
pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.094530 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.094895 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.094934 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724ac845-3835-458b-9645-e665be135ff9-serving-cert\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.094980 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095036 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert\") pod 
\"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095067 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7gn\" (UniqueName: \"kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095120 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-trusted-ca-bundle\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095145 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n58j\" (UniqueName: \"kubernetes.io/projected/22739961-e322-47f1-b232-eaa4cc35319c-kube-api-access-9n58j\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095200 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rc6w\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 
02:22:34.095225 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095259 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-serving-cert\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095281 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095308 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j9vb\" (UniqueName: \"kubernetes.io/projected/1a07cd28-a33d-4abd-9198-ba82bacd51ba-kube-api-access-5j9vb\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095333 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-metrics-certs\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " 
pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095357 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095384 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnht\" (UniqueName: \"kubernetes.io/projected/8f918d5b-1a4c-4b56-98a4-5cef638bb615-kube-api-access-fxnht\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095407 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095447 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095550 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: 
\"kubernetes.io/empty-dir/0abea413-e08a-465a-8ec4-2be650bfd5bd-snapshots\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095752 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0abea413-e08a-465a-8ec4-2be650bfd5bd-snapshots\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095764 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095824 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d49c\" (UniqueName: \"kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095827 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-profile-collector-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 
02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095851 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095882 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095943 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.095972 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bff42\" (UniqueName: \"kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.096057 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91938be6-9ae4-4849-abe8-fc842daecd23-serving-cert\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: 
\"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.096167 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:22:34.096169 master-0 kubenswrapper[31559]: I0216 02:22:34.096193 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-config\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.096268 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.096383 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.096493 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.096523 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.096667 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-cabundle\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.096935 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9defdfff-eb18-4beb-9591-918d0e4b4236-signing-cabundle\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.096781 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-ovnkube-identity-cm\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " 
pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.096938 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097016 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/27d876a7-6a48-4942-ad96-ed8ed3aa104b-cache\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097079 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097110 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097109 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" 
(UniqueName: \"kubernetes.io/empty-dir/27d876a7-6a48-4942-ad96-ed8ed3aa104b-cache\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097296 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-node-pullsecrets\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097346 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097380 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097406 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " 
pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097306 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-config\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:22:34.097542 master-0 kubenswrapper[31559]: I0216 02:22:34.097470 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.097622 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.097641 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-ovnkube-script-lib\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.097672 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-wtmp\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.097722 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-serving-cert\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.097716 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2a83ddd-ffa5-4127-9099-91187ad9dbba-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.097769 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.097811 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.097897 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-catalog-content\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.098018 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-catalog-content\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.098052 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.098067 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.098101 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.098144 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.098182 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-host\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.098257 master-0 kubenswrapper[31559]: I0216 02:22:34.098240 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098311 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-utilities\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16
02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098351 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098470 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47lht\" (UniqueName: \"kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098487 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487f82c-c14a-4f65-be77-5af2612f56f4-utilities\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098504 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098540 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgxq\" (UniqueName: \"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098550 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c02961f-30ec-4405-b7fa-9c4192342ae9-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098573 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098597 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0540a70-a256-422b-a827-e564d0e67866-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098722 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098757 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098785 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098805 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bde83629-b39c-401e-bc30-5ce205638918-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098834 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098862 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098896 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098932 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dcef814-353e-4985-9afc-9e545f7853ae-env-overrides\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.098966 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.099001 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.099035 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.099106 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f0f9b7d-e663-4927-861b-a9544d483b6e-metrics-certs\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:22:34.099125 master-0 kubenswrapper[31559]: I0216 02:22:34.099132 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2a83ddd-ffa5-4127-9099-91187ad9dbba-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099156 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099211 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099241 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-config\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099243 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kj7r\" (UniqueName: \"kubernetes.io/projected/00ef3b03-55dc-4661-b7fd-1e586c45b5de-kube-api-access-7kj7r\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099279 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-srv-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099339 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nmjz\" (UniqueName: \"kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099399 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099460 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-profile-collector-cert\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099462 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099589 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7276\" (UniqueName: \"kubernetes.io/projected/a3065737-c7c0-4fbb-b484-f2a9204d4908-kube-api-access-w7276\") pod \"csi-snapshot-controller-74b6595c6d-466x9\" (UID: \"a3065737-c7c0-4fbb-b484-f2a9204d4908\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099700 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099758 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099789 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099827 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099855 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216
02:22:34.099885 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbmnx\" (UniqueName: \"kubernetes.io/projected/a8d00a01-aa48-4830-a558-93a31cb98b31-kube-api-access-lbmnx\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.099992 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.100000 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.100067 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.100141 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.100171 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c4a146b2-c712-408a-97d8-5de3a84f3aaf-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.100194 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.100264 master-0 kubenswrapper[31559]: I0216 02:22:34.100223 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m4sb\" (UniqueName: \"kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100323 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100358 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100383 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100409 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17390d9a-148d-4927-a831-5bc4873c43d5-service-ca-bundle\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100476 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100504 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgqtf\" (UniqueName: \"kubernetes.io/projected/86af980a-2653-40c3-a368-a795d7fb8558-kube-api-access-tgqtf\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100559 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-utilities\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100615 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100660 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-default-certificate\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100673 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f810ea0-e32d-4097-beca-5194349a57a6-utilities\") pod \"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100711 31559 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-kubernetes\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100755 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxq28\" (UniqueName: \"kubernetes.io/projected/1487f82c-c14a-4f65-be77-5af2612f56f4-kube-api-access-wxq28\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100767 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-trusted-ca-bundle\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100791 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100848 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a8d00a01-aa48-4830-a558-93a31cb98b31-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100895 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.100928 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.101088 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.101135 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz49l\" (UniqueName: \"kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.101161 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.101180 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76915cba-7c11-4bd8-9943-81de74e7781b-srv-cert\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.101187 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz"
Feb 16 02:22:34.101195 master-0 kubenswrapper[31559]: I0216 02:22:34.101217 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b"
Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101249 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101355 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101367 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04804a08-e3a5-46f3-abcb-967866834baa-metrics-tls\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6"
Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101404 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnh76\" (UniqueName: \"kubernetes.io/projected/676adb95-3ffd-43e5-89e3-9d7a7d74df28-kube-api-access-lnh76\") pod \"migrator-5bd989df77-sh2wj\" (UID: \"676adb95-3ffd-43e5-89e3-9d7a7d74df28\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj"
Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101474 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-lib-modules\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101508 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\"
(UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101539 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101567 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101701 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101733 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4p8p\" (UniqueName: \"kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101757 31559 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-run\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101787 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101821 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101885 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-utilities\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101928 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/430c146b-ceaf-411a-add6-ce949243aabf-multus-daemon-config\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.102190 master-0 
kubenswrapper[31559]: I0216 02:22:34.101935 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101950 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-env-overrides\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101939 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.101980 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e379cfaf-3a4c-40e7-8641-3524b3669295-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.102017 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config\") pod 
\"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.102029 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89041b37-18f6-499d-89ec-a0523a25dc58-utilities\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.102062 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.102109 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.102151 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqmhs\" (UniqueName: \"kubernetes.io/projected/7ac81030-35d1-4d86-844d-65d1156d8944-kube-api-access-lqmhs\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.102161 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6c02961f-30ec-4405-b7fa-9c4192342ae9-config\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:22:34.102190 master-0 kubenswrapper[31559]: I0216 02:22:34.102193 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2582m\" (UniqueName: \"kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m\") pod \"csi-snapshot-controller-operator-7b87b97578-8n9v4\" (UID: \"d008dbd4-e713-4f2e-b64d-ca9cfc83a502\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102243 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102286 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102316 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456e6c3a-c16c-470b-a0cd-bb79865b54f0-metrics-tls\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " 
pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102457 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102491 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102510 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7317f91-9441-449f-9738-85da088cf94f-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102519 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102534 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/724ac845-3835-458b-9645-e665be135ff9-etcd-ca\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102579 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102852 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-webhook-cert\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.102629 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.103068 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:22:34.103341 master-0 kubenswrapper[31559]: I0216 02:22:34.103304 
31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1743372f-bdb0-4558-b47b-3714f3aa3fde-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:22:34.111620 master-0 kubenswrapper[31559]: I0216 02:22:34.111583 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 02:22:34.125279 master-0 kubenswrapper[31559]: I0216 02:22:34.125156 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 02:22:34.127842 master-0 kubenswrapper[31559]: I0216 02:22:34.127794 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-stats-auth\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:34.144988 master-0 kubenswrapper[31559]: I0216 02:22:34.144940 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 02:22:34.151457 master-0 kubenswrapper[31559]: I0216 02:22:34.151373 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17390d9a-148d-4927-a831-5bc4873c43d5-service-ca-bundle\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:34.166142 master-0 kubenswrapper[31559]: I0216 02:22:34.165707 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 16 02:22:34.176488 
master-0 kubenswrapper[31559]: I0216 02:22:34.176396 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:34.186155 master-0 kubenswrapper[31559]: I0216 02:22:34.186070 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 02:22:34.191508 master-0 kubenswrapper[31559]: I0216 02:22:34.191460 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-default-certificate\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:34.204135 master-0 kubenswrapper[31559]: I0216 02:22:34.204073 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.204312 master-0 kubenswrapper[31559]: I0216 02:22:34.204178 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-conf\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.204312 master-0 kubenswrapper[31559]: I0216 02:22:34.204281 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-system-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.204518 master-0 kubenswrapper[31559]: I0216 02:22:34.204368 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-conf\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.204619 master-0 kubenswrapper[31559]: I0216 02:22:34.204517 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.204697 master-0 kubenswrapper[31559]: I0216 02:22:34.204640 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:22:34.204697 master-0 kubenswrapper[31559]: I0216 02:22:34.204668 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysctl-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.204697 master-0 kubenswrapper[31559]: I0216 02:22:34.204685 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.204883 master-0 kubenswrapper[31559]: I0216 02:22:34.204719 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:22:34.204883 master-0 kubenswrapper[31559]: I0216 02:22:34.204793 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.205022 master-0 kubenswrapper[31559]: I0216 02:22:34.204920 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-netd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.205022 master-0 kubenswrapper[31559]: I0216 02:22:34.204929 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:34.205022 master-0 kubenswrapper[31559]: I0216 02:22:34.204971 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-netns\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.205022 master-0 kubenswrapper[31559]: I0216 02:22:34.205014 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-systemd\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.205355 master-0 kubenswrapper[31559]: I0216 02:22:34.205086 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:34.205355 master-0 kubenswrapper[31559]: I0216 02:22:34.205114 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-systemd\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.205355 master-0 kubenswrapper[31559]: I0216 02:22:34.205128 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.205355 master-0 kubenswrapper[31559]: I0216 02:22:34.205180 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-systemd\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.205355 master-0 kubenswrapper[31559]: I0216 02:22:34.205209 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.205355 master-0 kubenswrapper[31559]: I0216 02:22:34.205263 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit-dir\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.205814 master-0 kubenswrapper[31559]: I0216 02:22:34.205406 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-conf-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:34.205814 master-0 kubenswrapper[31559]: I0216 02:22:34.205492 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit-dir\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.205814 master-0 kubenswrapper[31559]: I0216 02:22:34.205556 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 02:22:34.205814 
master-0 kubenswrapper[31559]: I0216 02:22:34.205652 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.205814 master-0 kubenswrapper[31559]: I0216 02:22:34.205751 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-sys\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.205814 master-0 kubenswrapper[31559]: I0216 02:22:34.205786 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-var-lib-kubelet\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.205822 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.205840 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-ovn\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:34.206179 master-0 
kubenswrapper[31559]: I0216 02:22:34.205895 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.205934 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-sys\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.205970 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.205997 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a07cd28-a33d-4abd-9198-ba82bacd51ba-hosts-file\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.206004 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-var-lib-kubelet\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.206034 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.206052 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.206086 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a07cd28-a33d-4abd-9198-ba82bacd51ba-hosts-file\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.206132 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.206173 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-cnibin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.206179 master-0 kubenswrapper[31559]: I0216 02:22:34.206136 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206212 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-hostroot\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206244 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206264 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-slash\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206297 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-cni-bin\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206562 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17390d9a-148d-4927-a831-5bc4873c43d5-metrics-certs\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206630 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206684 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206722 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206756 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206794 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/27d876a7-6a48-4942-ad96-ed8ed3aa104b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206848 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-sys\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206862 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-os-release\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206882 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysconfig\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206902 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-run-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.206933 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a67f799-fd8d-4bee-9d67-720151c1650b-host-slash\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.207016 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-sys\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.207080 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-sysconfig\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.207131 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.207188 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.207250 master-0 kubenswrapper[31559]: I0216 02:22:34.207230 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207290 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-k8s-cni-cncf-io\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207292 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207318 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-kubelet\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207495 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207572 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-bin\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207695 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207776 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207834 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-node-pullsecrets\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207874 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207921 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-os-release\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207923 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207877 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-cni-dir\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.207968 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208010 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-wtmp\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208018 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f918d5b-1a4c-4b56-98a4-5cef638bb615-node-pullsecrets\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208063 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-wtmp\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208064 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208096 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-system-cni-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208120 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-host\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208173 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-host\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208202 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-node-log\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208226 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208517 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.208623 master-0 kubenswrapper[31559]: I0216 02:22:34.208560 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.208683 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.208733 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.208779 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.208863 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c4a146b2-c712-408a-97d8-5de3a84f3aaf-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.208916 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.208965 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209029 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-kubernetes\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209182 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-lib-modules\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209325 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-cnibin\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209382 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-netns\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209417 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209550 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c4a146b2-c712-408a-97d8-5de3a84f3aaf-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209555 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-cni-multus\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209476 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-var-lib-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209640 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/456e6c3a-c16c-470b-a0cd-bb79865b54f0-host-etc-kube\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209694 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-kubernetes\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209781 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-log-socket\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.209880 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.210048 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-lib-modules\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.210304 master-0 kubenswrapper[31559]: I0216 02:22:34.210245 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-run\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.210422 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-run\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.210471 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.210753 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.210896 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.210907 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-systemd-units\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.210969 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dcef814-353e-4985-9afc-9e545f7853ae-etc-openvswitch\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211035 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211062 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211104 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211165 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-modprobe-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211213 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-root\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211244 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22739961-e322-47f1-b232-eaa4cc35319c-audit-dir\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211265 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-multus-socket-dir-parent\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211326 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-run-multus-certs\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211331 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-host-var-lib-kubelet\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211288 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211408 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211417 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/00ef3b03-55dc-4661-b7fd-1e586c45b5de-etc-modprobe-d\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211481 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211471 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ad700b17-ba2a-41d4-8bec-538a009a613b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211516 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/430c146b-ceaf-411a-add6-ce949243aabf-etc-kubernetes\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211540 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0a900f93-91c9-4782-89a3-1cc09f3aec95-root\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211568 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22739961-e322-47f1-b232-eaa4cc35319c-audit-dir\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211602 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fbc8f91-f8cc-48d8-917c-64fa978069de-rootfs\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211656 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf"
Feb 16 02:22:34.211982 master-0 kubenswrapper[31559]: I0216 02:22:34.211829 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0fbc8f91-f8cc-48d8-917c-64fa978069de-rootfs\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7"
Feb 16 02:22:34.225812 master-0 kubenswrapper[31559]: I0216 02:22:34.225771 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 02:22:34.245372 master-0 kubenswrapper[31559]: I0216 02:22:34.245306 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 02:22:34.265515 master-0 kubenswrapper[31559]: I0216 02:22:34.265426 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 16 02:22:34.271343 master-0 kubenswrapper[31559]: I0216 02:22:34.271275 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm"
Feb 16 02:22:34.285299 master-0 kubenswrapper[31559]: I0216 02:22:34.285241 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 16 02:22:34.352911 master-0 kubenswrapper[31559]: I0216 02:22:34.325363 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 16 02:22:34.352911 master-0 kubenswrapper[31559]: I0216
02:22:34.329262 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:34.352911 master-0 kubenswrapper[31559]: I0216 02:22:34.332966 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:34.356278 master-0 kubenswrapper[31559]: I0216 02:22:34.356121 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 16 02:22:34.358083 master-0 kubenswrapper[31559]: I0216 02:22:34.357988 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 16 02:22:34.365131 master-0 kubenswrapper[31559]: I0216 02:22:34.365079 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 02:22:34.368125 master-0 kubenswrapper[31559]: I0216 02:22:34.368088 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:34.377369 master-0 kubenswrapper[31559]: I0216 02:22:34.377248 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ac81030-35d1-4d86-844d-65d1156d8944-metrics-tls\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6" Feb 16 02:22:34.385741 master-0 kubenswrapper[31559]: I0216 02:22:34.385708 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 02:22:34.388877 master-0 kubenswrapper[31559]: I0216 02:22:34.388826 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac81030-35d1-4d86-844d-65d1156d8944-config-volume\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6" Feb 16 02:22:34.425233 master-0 kubenswrapper[31559]: I0216 02:22:34.425025 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 02:22:34.453664 master-0 kubenswrapper[31559]: I0216 02:22:34.453596 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") pod \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " Feb 16 02:22:34.453854 master-0 kubenswrapper[31559]: I0216 02:22:34.453746 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") pod \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " Feb 16 02:22:34.454914 master-0 kubenswrapper[31559]: I0216 02:22:34.454597 31559 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock" (OuterVolumeSpecName: "var-lock") pod "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:22:34.454996 master-0 kubenswrapper[31559]: I0216 02:22:34.454941 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:22:34.456132 master-0 kubenswrapper[31559]: I0216 02:22:34.455658 31559 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:22:34.456132 master-0 kubenswrapper[31559]: I0216 02:22:34.455699 31559 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:22:34.461061 master-0 kubenswrapper[31559]: I0216 02:22:34.460860 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 02:22:34.465978 master-0 kubenswrapper[31559]: I0216 02:22:34.465799 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 02:22:34.466583 master-0 kubenswrapper[31559]: I0216 02:22:34.466533 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 02:22:34.469539 master-0 kubenswrapper[31559]: I0216 02:22:34.469003 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-trusted-ca-bundle\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.470489 master-0 kubenswrapper[31559]: I0216 02:22:34.470226 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-client\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.485579 master-0 kubenswrapper[31559]: I0216 02:22:34.485512 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 02:22:34.488409 master-0 kubenswrapper[31559]: I0216 02:22:34.488350 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-serving-cert\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.505749 master-0 kubenswrapper[31559]: I0216 02:22:34.505694 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 02:22:34.515316 master-0 kubenswrapper[31559]: I0216 02:22:34.515282 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8f918d5b-1a4c-4b56-98a4-5cef638bb615-encryption-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.525095 master-0 kubenswrapper[31559]: I0216 02:22:34.525045 31559 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 02:22:34.527667 master-0 kubenswrapper[31559]: I0216 02:22:34.527633 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-image-import-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.545566 master-0 kubenswrapper[31559]: I0216 02:22:34.545521 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 02:22:34.565979 master-0 kubenswrapper[31559]: I0216 02:22:34.565924 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 02:22:34.585982 master-0 kubenswrapper[31559]: I0216 02:22:34.585932 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 02:22:34.590603 master-0 kubenswrapper[31559]: I0216 02:22:34.590559 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-etcd-serving-ca\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.605913 master-0 kubenswrapper[31559]: I0216 02:22:34.605846 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 02:22:34.615428 master-0 kubenswrapper[31559]: I0216 02:22:34.615389 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-config\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " 
pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.625555 master-0 kubenswrapper[31559]: I0216 02:22:34.625512 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 02:22:34.633388 master-0 kubenswrapper[31559]: I0216 02:22:34.633278 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8f918d5b-1a4c-4b56-98a4-5cef638bb615-audit\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:34.646252 master-0 kubenswrapper[31559]: I0216 02:22:34.646206 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 02:22:34.657073 master-0 kubenswrapper[31559]: I0216 02:22:34.657001 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-trusted-ca-bundle\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.665791 master-0 kubenswrapper[31559]: I0216 02:22:34.665750 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 02:22:34.671361 master-0 kubenswrapper[31559]: I0216 02:22:34.671309 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-encryption-config\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.686165 master-0 kubenswrapper[31559]: I0216 02:22:34.686101 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 02:22:34.695865 master-0 kubenswrapper[31559]: I0216 02:22:34.695820 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-etcd-client\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.706508 master-0 kubenswrapper[31559]: I0216 02:22:34.706421 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 02:22:34.713968 master-0 kubenswrapper[31559]: I0216 02:22:34.713915 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22739961-e322-47f1-b232-eaa4cc35319c-serving-cert\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.725605 master-0 kubenswrapper[31559]: I0216 02:22:34.725541 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 02:22:34.746349 master-0 kubenswrapper[31559]: I0216 02:22:34.746295 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 02:22:34.765675 master-0 kubenswrapper[31559]: I0216 02:22:34.765618 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 02:22:34.774004 master-0 kubenswrapper[31559]: I0216 02:22:34.773938 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-audit-policies\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " 
pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.786004 master-0 kubenswrapper[31559]: I0216 02:22:34.785955 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 02:22:34.793108 master-0 kubenswrapper[31559]: I0216 02:22:34.793068 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22739961-e322-47f1-b232-eaa4cc35319c-etcd-serving-ca\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:34.806066 master-0 kubenswrapper[31559]: I0216 02:22:34.806028 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 16 02:22:34.812999 master-0 kubenswrapper[31559]: I0216 02:22:34.812943 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/37fd7550-cc81-4180-8540-0bc5f62f63d2-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-9dx2k\" (UID: \"37fd7550-cc81-4180-8540-0bc5f62f63d2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:22:34.827280 master-0 kubenswrapper[31559]: I0216 02:22:34.827219 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 02:22:34.829395 master-0 kubenswrapper[31559]: I0216 02:22:34.829334 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad700b17-ba2a-41d4-8bec-538a009a613b-serving-cert\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" Feb 16 02:22:34.845569 
master-0 kubenswrapper[31559]: I0216 02:22:34.845524 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 02:22:34.866201 master-0 kubenswrapper[31559]: I0216 02:22:34.866142 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 02:22:34.870547 master-0 kubenswrapper[31559]: I0216 02:22:34.870506 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ad700b17-ba2a-41d4-8bec-538a009a613b-service-ca\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" Feb 16 02:22:34.885699 master-0 kubenswrapper[31559]: I0216 02:22:34.885596 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 02:22:34.905613 master-0 kubenswrapper[31559]: I0216 02:22:34.905554 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 02:22:34.911285 master-0 kubenswrapper[31559]: I0216 02:22:34.911240 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a8d00a01-aa48-4830-a558-93a31cb98b31-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:22:34.922231 master-0 kubenswrapper[31559]: I0216 02:22:34.922196 31559 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 02:22:34.925784 master-0 kubenswrapper[31559]: I0216 02:22:34.925667 31559 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 02:22:34.926196 master-0 kubenswrapper[31559]: I0216 02:22:34.926138 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 02:22:34.926287 master-0 kubenswrapper[31559]: I0216 02:22:34.926206 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 02:22:34.926287 master-0 kubenswrapper[31559]: I0216 02:22:34.926232 31559 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 02:22:34.926886 master-0 kubenswrapper[31559]: I0216 02:22:34.926712 31559 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 02:22:34.946115 master-0 kubenswrapper[31559]: I0216 02:22:34.946078 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-s4hmw" Feb 16 02:22:34.964052 master-0 kubenswrapper[31559]: I0216 02:22:34.963994 31559 request.go:700] Waited for 1.019013149s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0 Feb 16 02:22:34.974541 master-0 kubenswrapper[31559]: I0216 02:22:34.974316 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 16 02:22:34.976460 master-0 kubenswrapper[31559]: I0216 02:22:34.976402 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:22:34.986113 master-0 kubenswrapper[31559]: I0216 02:22:34.985927 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 16 02:22:34.987968 master-0 kubenswrapper[31559]: I0216 02:22:34.987885 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:22:35.014695 master-0 kubenswrapper[31559]: I0216 02:22:35.014626 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 16 02:22:35.025948 master-0 kubenswrapper[31559]: I0216 02:22:35.025760 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 16 02:22:35.045279 master-0 kubenswrapper[31559]: I0216 02:22:35.045237 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 02:22:35.052180 master-0 kubenswrapper[31559]: I0216 02:22:35.052107 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b62004d-7fe3-47ae-8e26-8496befb047c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:22:35.065892 master-0 kubenswrapper[31559]: I0216 02:22:35.065840 31559 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-wp42g" Feb 16 02:22:35.085855 master-0 kubenswrapper[31559]: E0216 02:22:35.085512 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.085855 master-0 kubenswrapper[31559]: E0216 02:22:35.085542 31559 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.085855 master-0 kubenswrapper[31559]: E0216 02:22:35.085637 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.585610588 +0000 UTC m=+7.930216633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.085855 master-0 kubenswrapper[31559]: E0216 02:22:35.085671 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs podName:d8bbd369-4219-48ef-ae2d-b45c81789403 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.585656799 +0000 UTC m=+7.930262854 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs") pod "machine-config-server-5zv6j" (UID: "d8bbd369-4219-48ef-ae2d-b45c81789403") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.085855 master-0 kubenswrapper[31559]: I0216 02:22:35.085700 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 02:22:35.085855 master-0 kubenswrapper[31559]: E0216 02:22:35.085851 31559 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.086652 master-0 kubenswrapper[31559]: E0216 02:22:35.085910 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images podName:d870332c-2498-4135-a9b3-a71e67c2805b nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.585889335 +0000 UTC m=+7.930495360 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images") pod "machine-config-operator-84976bb859-5gs6g" (UID: "d870332c-2498-4135-a9b3-a71e67c2805b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.086940 master-0 kubenswrapper[31559]: E0216 02:22:35.086858 31559 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.087069 master-0 kubenswrapper[31559]: E0216 02:22:35.087048 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images podName:27a42eb0-677c-414d-b0ec-f945ec39b7e9 nodeName:}" failed. 
No retries permitted until 2026-02-16 02:22:35.587025224 +0000 UTC m=+7.931631249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images") pod "cluster-baremetal-operator-7bc947fc7d-frvgm" (UID: "27a42eb0-677c-414d-b0ec-f945ec39b7e9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.087153 master-0 kubenswrapper[31559]: E0216 02:22:35.087067 31559 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.087153 master-0 kubenswrapper[31559]: E0216 02:22:35.087140 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs podName:c8086f93-2d98-4218-afac-20a65e6bf943 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.587124327 +0000 UTC m=+7.931730372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs") pod "multus-admission-controller-6d678b8d67-8gzlx" (UID: "c8086f93-2d98-4218-afac-20a65e6bf943") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.088074 master-0 kubenswrapper[31559]: E0216 02:22:35.087921 31559 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.088074 master-0 kubenswrapper[31559]: E0216 02:22:35.088003 31559 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088074 master-0 kubenswrapper[31559]: E0216 02:22:35.088035 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.58800144 +0000 UTC m=+7.932607565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.088074 master-0 kubenswrapper[31559]: E0216 02:22:35.088044 31559 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.088074 master-0 kubenswrapper[31559]: E0216 02:22:35.088065 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config podName:fec84b8a-a0d1-4b07-8827-cef0beb89ecd nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.588050731 +0000 UTC m=+7.932656786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config") pod "machine-api-operator-bd7dd5c46-qw2zq" (UID: "fec84b8a-a0d1-4b07-8827-cef0beb89ecd") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088105 31559 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088115 31559 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088107 31559 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088012 31559 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088166 31559 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088115 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config podName:32d420d6-bbda-42c0-82fe-8b187ad91607 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.588095502 +0000 UTC m=+7.932701547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-2zl2r" (UID: "32d420d6-bbda-42c0-82fe-8b187ad91607") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088260 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config podName:c442d349-668b-4d01-a097-5981b7a04eac nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.588204045 +0000 UTC m=+7.932810100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config") pod "machine-approver-8569dd85ff-vqtcl" (UID: "c442d349-668b-4d01-a097-5981b7a04eac") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088287 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config podName:83883885-f493-4559-9c0f-e28d69712475 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.588274167 +0000 UTC m=+7.932880222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config") pod "route-controller-manager-998bd8b4b-hm5k2" (UID: "83883885-f493-4559-9c0f-e28d69712475") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088314 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle podName:0abea413-e08a-465a-8ec4-2be650bfd5bd nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.588300588 +0000 UTC m=+7.932906643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle") pod "insights-operator-cb4f7b4cf-llpf5" (UID: "0abea413-e08a-465a-8ec4-2be650bfd5bd") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088333 31559 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088338 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls podName:97b8261a-91e3-435e-93f8-0a17f30359fd nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.588327708 +0000 UTC m=+7.932933763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls") pod "machine-config-controller-686c884b4d-zljgp" (UID: "97b8261a-91e3-435e-93f8-0a17f30359fd") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088372 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images podName:c4a146b2-c712-408a-97d8-5de3a84f3aaf nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.588360209 +0000 UTC m=+7.932966274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" (UID: "c4a146b2-c712-408a-97d8-5de3a84f3aaf") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.088419 master-0 kubenswrapper[31559]: E0216 02:22:35.088395 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config podName:48863ff6-63ac-42d7-bac7-29d888c92db9 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.5883842 +0000 UTC m=+7.932990265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-9rvcj" (UID: "48863ff6-63ac-42d7-bac7-29d888c92db9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089646 master-0 kubenswrapper[31559]: E0216 02:22:35.089235 31559 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089646 master-0 kubenswrapper[31559]: E0216 02:22:35.089473 31559 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.089646 master-0 kubenswrapper[31559]: E0216 02:22:35.089521 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images podName:fec84b8a-a0d1-4b07-8827-cef0beb89ecd nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.589476308 +0000 UTC m=+7.934082333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images") pod "machine-api-operator-bd7dd5c46-qw2zq" (UID: "fec84b8a-a0d1-4b07-8827-cef0beb89ecd") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089646 master-0 kubenswrapper[31559]: E0216 02:22:35.089545 31559 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089646 master-0 kubenswrapper[31559]: E0216 02:22:35.089570 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls podName:695d1f01-d3c1-4fb9-9dda-daf33eae11f5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.58955956 +0000 UTC m=+7.934165585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-v9mmd" (UID: "695d1f01-d3c1-4fb9-9dda-daf33eae11f5") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.089646 master-0 kubenswrapper[31559]: E0216 02:22:35.089580 31559 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089646 master-0 kubenswrapper[31559]: E0216 02:22:35.089548 31559 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.089940 master-0 kubenswrapper[31559]: E0216 02:22:35.089661 31559 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089940 master-0 kubenswrapper[31559]: E0216 02:22:35.089608 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config podName:c442d349-668b-4d01-a097-5981b7a04eac nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.589588181 +0000 UTC m=+7.934194236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config") pod "machine-approver-8569dd85ff-vqtcl" (UID: "c442d349-668b-4d01-a097-5981b7a04eac") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089940 master-0 kubenswrapper[31559]: E0216 02:22:35.089732 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config podName:27a42eb0-677c-414d-b0ec-f945ec39b7e9 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.589705734 +0000 UTC m=+7.934311789 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config") pod "cluster-baremetal-operator-7bc947fc7d-frvgm" (UID: "27a42eb0-677c-414d-b0ec-f945ec39b7e9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089940 master-0 kubenswrapper[31559]: E0216 02:22:35.089758 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert podName:dc3354cb-b6c3-40a5-a695-cccb079ad292 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.589745085 +0000 UTC m=+7.934351140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert") pod "packageserver-87777c9b7-fxzh6" (UID: "dc3354cb-b6c3-40a5-a695-cccb079ad292") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.089940 master-0 kubenswrapper[31559]: E0216 02:22:35.089793 31559 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.089940 master-0 kubenswrapper[31559]: E0216 02:22:35.089802 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config podName:97b8261a-91e3-435e-93f8-0a17f30359fd nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.589784656 +0000 UTC m=+7.934390711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-zljgp" (UID: "97b8261a-91e3-435e-93f8-0a17f30359fd") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.089940 master-0 kubenswrapper[31559]: E0216 02:22:35.089843 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls podName:32d420d6-bbda-42c0-82fe-8b187ad91607 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.589830757 +0000 UTC m=+7.934436812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-2zl2r" (UID: "32d420d6-bbda-42c0-82fe-8b187ad91607") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.090791 master-0 kubenswrapper[31559]: E0216 02:22:35.090658 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.090791 master-0 kubenswrapper[31559]: E0216 02:22:35.090701 31559 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.090791 master-0 kubenswrapper[31559]: E0216 02:22:35.090743 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.590727081 +0000 UTC m=+7.935333196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.090791 master-0 kubenswrapper[31559]: E0216 02:22:35.090773 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls podName:0a900f93-91c9-4782-89a3-1cc09f3aec95 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.590755101 +0000 UTC m=+7.935361156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls") pod "node-exporter-jxbq6" (UID: "0a900f93-91c9-4782-89a3-1cc09f3aec95") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.091762 master-0 kubenswrapper[31559]: E0216 02:22:35.091512 31559 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.091762 master-0 kubenswrapper[31559]: E0216 02:22:35.091551 31559 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.091762 master-0 kubenswrapper[31559]: E0216 02:22:35.091580 31559 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.091762 master-0 kubenswrapper[31559]: E0216 02:22:35.091587 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token podName:d8bbd369-4219-48ef-ae2d-b45c81789403 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.591570593 +0000 UTC m=+7.936176648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token") pod "machine-config-server-5zv6j" (UID: "d8bbd369-4219-48ef-ae2d-b45c81789403") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.091762 master-0 kubenswrapper[31559]: E0216 02:22:35.091652 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls podName:c442d349-668b-4d01-a097-5981b7a04eac nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.591637834 +0000 UTC m=+7.936243969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls") pod "machine-approver-8569dd85ff-vqtcl" (UID: "c442d349-668b-4d01-a097-5981b7a04eac") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.091762 master-0 kubenswrapper[31559]: E0216 02:22:35.091670 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config podName:e491b5ed-9c09-4308-9843-fba8d43bd3ae nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.591662675 +0000 UTC m=+7.936268820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config") pod "controller-manager-5788fc6459-29m25" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.091762 master-0 kubenswrapper[31559]: E0216 02:22:35.091707 31559 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.091762 master-0 kubenswrapper[31559]: E0216 02:22:35.091740 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.591732137 +0000 UTC m=+7.936338292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.092886 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.092913 31559 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.092950 31559 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.092967 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca podName:32d420d6-bbda-42c0-82fe-8b187ad91607 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.592942178 +0000 UTC m=+7.937548233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca") pod "openshift-state-metrics-546cc7d765-2zl2r" (UID: "32d420d6-bbda-42c0-82fe-8b187ad91607") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.093013 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert podName:7846b339-c46d-4983-b586-a28f2868f665 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.59299144 +0000 UTC m=+7.937597545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert") pod "ingress-canary-6t7mx" (UID: "7846b339-c46d-4983-b586-a28f2868f665") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.093023 31559 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.093040 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.093045 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.093054 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls podName:fec84b8a-a0d1-4b07-8827-cef0beb89ecd nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.593038951 +0000 UTC m=+7.937645066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-qw2zq" (UID: "fec84b8a-a0d1-4b07-8827-cef0beb89ecd") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.093108 master-0 kubenswrapper[31559]: E0216 02:22:35.093098 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config podName:0fbc8f91-f8cc-48d8-917c-64fa978069de nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.593086562 +0000 UTC m=+7.937692617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config") pod "machine-config-daemon-qd4l7" (UID: "0fbc8f91-f8cc-48d8-917c-64fa978069de") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.093734 master-0 kubenswrapper[31559]: E0216 02:22:35.093126 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca podName:86af980a-2653-40c3-a368-a795d7fb8558 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.593112753 +0000 UTC m=+7.937718808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca") pod "kube-state-metrics-7cc9598d54-2gx8b" (UID: "86af980a-2653-40c3-a368-a795d7fb8558") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.093734 master-0 kubenswrapper[31559]: E0216 02:22:35.093152 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca podName:0a900f93-91c9-4782-89a3-1cc09f3aec95 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.593140674 +0000 UTC m=+7.937746729 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca") pod "node-exporter-jxbq6" (UID: "0a900f93-91c9-4782-89a3-1cc09f3aec95") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.095574 master-0 kubenswrapper[31559]: E0216 02:22:35.095215 31559 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.095574 master-0 kubenswrapper[31559]: E0216 02:22:35.095288 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert podName:0abea413-e08a-465a-8ec4-2be650bfd5bd nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.595273229 +0000 UTC m=+7.939879354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert") pod "insights-operator-cb4f7b4cf-llpf5" (UID: "0abea413-e08a-465a-8ec4-2be650bfd5bd") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.095574 master-0 kubenswrapper[31559]: E0216 02:22:35.095471 31559 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.095574 master-0 kubenswrapper[31559]: E0216 02:22:35.095550 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert podName:dc3354cb-b6c3-40a5-a695-cccb079ad292 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.595512945 +0000 UTC m=+7.940119060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert") pod "packageserver-87777c9b7-fxzh6" (UID: "dc3354cb-b6c3-40a5-a695-cccb079ad292") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.096212 master-0 kubenswrapper[31559]: E0216 02:22:35.095674 31559 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.096212 master-0 kubenswrapper[31559]: E0216 02:22:35.095939 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca podName:e491b5ed-9c09-4308-9843-fba8d43bd3ae nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.5957104 +0000 UTC m=+7.940316515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca") pod "controller-manager-5788fc6459-29m25" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.096825 master-0 kubenswrapper[31559]: E0216 02:22:35.096795 31559 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.096941 master-0 kubenswrapper[31559]: E0216 02:22:35.096876 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert podName:83883885-f493-4559-9c0f-e28d69712475 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.5968589 +0000 UTC m=+7.941464945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert") pod "route-controller-manager-998bd8b4b-hm5k2" (UID: "83883885-f493-4559-9c0f-e28d69712475") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.098022 master-0 kubenswrapper[31559]: E0216 02:22:35.097890 31559 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.098022 master-0 kubenswrapper[31559]: E0216 02:22:35.097919 31559 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.098022 master-0 kubenswrapper[31559]: E0216 02:22:35.097981 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config podName:0a900f93-91c9-4782-89a3-1cc09f3aec95 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.597961679 +0000 UTC m=+7.942567704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config") pod "node-exporter-jxbq6" (UID: "0a900f93-91c9-4782-89a3-1cc09f3aec95") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.098022 master-0 kubenswrapper[31559]: E0216 02:22:35.098002 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls podName:c4a146b2-c712-408a-97d8-5de3a84f3aaf nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.59799361 +0000 UTC m=+7.942599635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" (UID: "c4a146b2-c712-408a-97d8-5de3a84f3aaf") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.099274 master-0 kubenswrapper[31559]: E0216 02:22:35.099089 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.099274 master-0 kubenswrapper[31559]: E0216 02:22:35.099135 31559 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.099274 master-0 kubenswrapper[31559]: E0216 02:22:35.099142 31559 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2md94v7udfjth: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.099274 master-0 kubenswrapper[31559]: E0216 02:22:35.099176 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca podName:695d1f01-d3c1-4fb9-9dda-daf33eae11f5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.59915687 +0000 UTC m=+7.943762925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca") pod "prometheus-operator-7485d645b8-v9mmd" (UID: "695d1f01-d3c1-4fb9-9dda-daf33eae11f5") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.099274 master-0 kubenswrapper[31559]: E0216 02:22:35.099102 31559 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.099274 master-0 kubenswrapper[31559]: E0216 02:22:35.099209 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls podName:27a42eb0-677c-414d-b0ec-f945ec39b7e9 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.599192511 +0000 UTC m=+7.943798566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-frvgm" (UID: "27a42eb0-677c-414d-b0ec-f945ec39b7e9") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.099274 master-0 kubenswrapper[31559]: E0216 02:22:35.099234 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert podName:27a42eb0-677c-414d-b0ec-f945ec39b7e9 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.599221832 +0000 UTC m=+7.943828007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert") pod "cluster-baremetal-operator-7bc947fc7d-frvgm" (UID: "27a42eb0-677c-414d-b0ec-f945ec39b7e9") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.099274 master-0 kubenswrapper[31559]: E0216 02:22:35.099254 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.599245932 +0000 UTC m=+7.943852077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.100383 master-0 kubenswrapper[31559]: E0216 02:22:35.100342 31559 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.100479 master-0 kubenswrapper[31559]: E0216 02:22:35.100409 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config podName:86af980a-2653-40c3-a368-a795d7fb8558 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.600393802 +0000 UTC m=+7.944999857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-2gx8b" (UID: "86af980a-2653-40c3-a368-a795d7fb8558") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.100479 master-0 kubenswrapper[31559]: E0216 02:22:35.100408 31559 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.100673 master-0 kubenswrapper[31559]: E0216 02:22:35.100479 31559 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 02:22:35.100673 master-0 kubenswrapper[31559]: E0216 02:22:35.100518 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert podName:a77e2f8f-d164-4a58-aab2-f3444c05cacb nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.600500645 +0000 UTC m=+7.945106700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-qm7rm" (UID: "a77e2f8f-d164-4a58-aab2-f3444c05cacb") : failed to sync secret cache: timed out waiting for the condition
Feb 16 02:22:35.100673 master-0 kubenswrapper[31559]: E0216 02:22:35.100548 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config podName:d870332c-2498-4135-a9b3-a71e67c2805b nodeName:}" failed.
No retries permitted until 2026-02-16 02:22:35.600535636 +0000 UTC m=+7.945141691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config") pod "machine-config-operator-84976bb859-5gs6g" (UID: "d870332c-2498-4135-a9b3-a71e67c2805b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.100673 master-0 kubenswrapper[31559]: E0216 02:22:35.100602 31559 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.100673 master-0 kubenswrapper[31559]: E0216 02:22:35.100657 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert podName:48863ff6-63ac-42d7-bac7-29d888c92db9 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.600647299 +0000 UTC m=+7.945253324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert") pod "cluster-autoscaler-operator-67fd9768b5-9rvcj" (UID: "48863ff6-63ac-42d7-bac7-29d888c92db9") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.101045 master-0 kubenswrapper[31559]: E0216 02:22:35.100906 31559 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.101045 master-0 kubenswrapper[31559]: E0216 02:22:35.100943 31559 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.101045 master-0 kubenswrapper[31559]: E0216 02:22:35.100961 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert podName:e491b5ed-9c09-4308-9843-fba8d43bd3ae nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.600948547 +0000 UTC m=+7.945554672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert") pod "controller-manager-5788fc6459-29m25" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.101045 master-0 kubenswrapper[31559]: E0216 02:22:35.100997 31559 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.101045 master-0 kubenswrapper[31559]: E0216 02:22:35.101001 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle podName:0abea413-e08a-465a-8ec4-2be650bfd5bd nodeName:}" failed. 
No retries permitted until 2026-02-16 02:22:35.600989918 +0000 UTC m=+7.945595943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-llpf5" (UID: "0abea413-e08a-465a-8ec4-2be650bfd5bd") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.101045 master-0 kubenswrapper[31559]: E0216 02:22:35.101028 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls podName:d870332c-2498-4135-a9b3-a71e67c2805b nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.601020629 +0000 UTC m=+7.945626784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls") pod "machine-config-operator-84976bb859-5gs6g" (UID: "d870332c-2498-4135-a9b3-a71e67c2805b") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.101662 31559 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.101717 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.101745 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles podName:e491b5ed-9c09-4308-9843-fba8d43bd3ae nodeName:}" failed. 
No retries permitted until 2026-02-16 02:22:35.601722517 +0000 UTC m=+7.946328572 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles") pod "controller-manager-5788fc6459-29m25" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.101774 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap podName:86af980a-2653-40c3-a368-a795d7fb8558 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.601758908 +0000 UTC m=+7.946364963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-2gx8b" (UID: "86af980a-2653-40c3-a368-a795d7fb8558") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.101801 31559 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.101828 31559 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.101843 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls podName:0fbc8f91-f8cc-48d8-917c-64fa978069de nodeName:}" failed. 
No retries permitted until 2026-02-16 02:22:35.60182946 +0000 UTC m=+7.946435505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls") pod "machine-config-daemon-qd4l7" (UID: "0fbc8f91-f8cc-48d8-917c-64fa978069de") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.101895 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca podName:83883885-f493-4559-9c0f-e28d69712475 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.601875551 +0000 UTC m=+7.946481676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca") pod "route-controller-manager-998bd8b4b-hm5k2" (UID: "83883885-f493-4559-9c0f-e28d69712475") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.102071 31559 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.102182 master-0 kubenswrapper[31559]: E0216 02:22:35.102155 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls podName:86af980a-2653-40c3-a368-a795d7fb8558 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.602142068 +0000 UTC m=+7.946748173 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-2gx8b" (UID: "86af980a-2653-40c3-a368-a795d7fb8558") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.103043 master-0 kubenswrapper[31559]: E0216 02:22:35.102221 31559 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.103043 master-0 kubenswrapper[31559]: E0216 02:22:35.102280 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config podName:c4a146b2-c712-408a-97d8-5de3a84f3aaf nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.602264441 +0000 UTC m=+7.946870496 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" (UID: "c4a146b2-c712-408a-97d8-5de3a84f3aaf") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:35.103043 master-0 kubenswrapper[31559]: E0216 02:22:35.102671 31559 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.103043 master-0 kubenswrapper[31559]: E0216 02:22:35.102717 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config podName:695d1f01-d3c1-4fb9-9dda-daf33eae11f5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:35.602705912 +0000 UTC m=+7.947311937 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-v9mmd" (UID: "695d1f01-d3c1-4fb9-9dda-daf33eae11f5") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:35.106068 master-0 kubenswrapper[31559]: I0216 02:22:35.106030 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 02:22:35.125921 master-0 kubenswrapper[31559]: I0216 02:22:35.125860 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 16 02:22:35.145713 master-0 kubenswrapper[31559]: I0216 02:22:35.145612 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 02:22:35.165663 master-0 kubenswrapper[31559]: I0216 02:22:35.165594 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 16 02:22:35.185985 master-0 kubenswrapper[31559]: I0216 02:22:35.185912 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 02:22:35.206825 master-0 kubenswrapper[31559]: I0216 02:22:35.206768 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 16 02:22:35.225308 master-0 kubenswrapper[31559]: I0216 02:22:35.225244 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 02:22:35.245380 master-0 kubenswrapper[31559]: I0216 02:22:35.245317 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" 
Feb 16 02:22:35.266339 master-0 kubenswrapper[31559]: I0216 02:22:35.266277 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 02:22:35.286415 master-0 kubenswrapper[31559]: I0216 02:22:35.286349 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 02:22:35.306192 master-0 kubenswrapper[31559]: I0216 02:22:35.306123 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 02:22:35.326309 master-0 kubenswrapper[31559]: I0216 02:22:35.326256 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 02:22:35.339595 master-0 kubenswrapper[31559]: I0216 02:22:35.339547 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:35.347157 master-0 kubenswrapper[31559]: I0216 02:22:35.347079 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-8pgh8" Feb 16 02:22:35.365734 master-0 kubenswrapper[31559]: I0216 02:22:35.365695 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 02:22:35.385906 master-0 kubenswrapper[31559]: I0216 02:22:35.385871 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-785jj" Feb 16 02:22:35.406561 master-0 kubenswrapper[31559]: I0216 02:22:35.406414 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 02:22:35.426098 master-0 kubenswrapper[31559]: I0216 02:22:35.426028 31559 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 02:22:35.446765 master-0 kubenswrapper[31559]: I0216 02:22:35.446692 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 02:22:35.466466 master-0 kubenswrapper[31559]: I0216 02:22:35.466370 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 02:22:35.486377 master-0 kubenswrapper[31559]: I0216 02:22:35.486295 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 02:22:35.506386 master-0 kubenswrapper[31559]: I0216 02:22:35.506311 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 02:22:35.526658 master-0 kubenswrapper[31559]: I0216 02:22:35.526568 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 02:22:35.557513 master-0 kubenswrapper[31559]: I0216 02:22:35.557404 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 02:22:35.565340 master-0 kubenswrapper[31559]: I0216 02:22:35.565270 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 02:22:35.586331 master-0 kubenswrapper[31559]: I0216 02:22:35.586241 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 02:22:35.592968 master-0 kubenswrapper[31559]: I0216 02:22:35.592905 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: 
\"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:35.593080 master-0 kubenswrapper[31559]: I0216 02:22:35.592970 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:22:35.593080 master-0 kubenswrapper[31559]: I0216 02:22:35.593031 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:35.593238 master-0 kubenswrapper[31559]: I0216 02:22:35.593085 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" Feb 16 02:22:35.593238 master-0 kubenswrapper[31559]: I0216 02:22:35.593127 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:35.593238 master-0 kubenswrapper[31559]: I0216 02:22:35.593190 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:22:35.593418 master-0 kubenswrapper[31559]: I0216 02:22:35.593241 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:22:35.593418 master-0 kubenswrapper[31559]: I0216 02:22:35.593283 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" Feb 16 02:22:35.593580 master-0 kubenswrapper[31559]: I0216 02:22:35.593523 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:35.593642 master-0 kubenswrapper[31559]: I0216 02:22:35.593612 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " 
pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:35.593755 master-0 kubenswrapper[31559]: I0216 02:22:35.593689 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-images\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:35.593853 master-0 kubenswrapper[31559]: I0216 02:22:35.593714 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:35.593949 master-0 kubenswrapper[31559]: I0216 02:22:35.593862 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:22:35.593949 master-0 kubenswrapper[31559]: I0216 02:22:35.593932 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:22:35.594257 master-0 kubenswrapper[31559]: I0216 02:22:35.594010 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:22:35.594257 master-0 kubenswrapper[31559]: I0216 02:22:35.594015 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-config\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:22:35.594257 master-0 kubenswrapper[31559]: I0216 02:22:35.594236 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:35.594476 master-0 kubenswrapper[31559]: I0216 02:22:35.594309 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:22:35.594476 master-0 kubenswrapper[31559]: I0216 02:22:35.594359 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 
02:22:35.594476 master-0 kubenswrapper[31559]: I0216 02:22:35.594401 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:35.594656 master-0 kubenswrapper[31559]: I0216 02:22:35.594495 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:35.594656 master-0 kubenswrapper[31559]: I0216 02:22:35.594520 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a42eb0-677c-414d-b0ec-f945ec39b7e9-config\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:35.594656 master-0 kubenswrapper[31559]: I0216 02:22:35.594620 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:35.594865 master-0 kubenswrapper[31559]: I0216 02:22:35.594609 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48863ff6-63ac-42d7-bac7-29d888c92db9-auth-proxy-config\") pod 
\"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:22:35.594865 master-0 kubenswrapper[31559]: I0216 02:22:35.594751 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" Feb 16 02:22:35.594995 master-0 kubenswrapper[31559]: I0216 02:22:35.594864 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:22:35.594995 master-0 kubenswrapper[31559]: I0216 02:22:35.594971 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:35.595124 master-0 kubenswrapper[31559]: I0216 02:22:35.595046 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:35.595187 master-0 
kubenswrapper[31559]: I0216 02:22:35.595145 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:22:35.595248 master-0 kubenswrapper[31559]: I0216 02:22:35.595221 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:22:35.595338 master-0 kubenswrapper[31559]: I0216 02:22:35.595304 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:35.595415 master-0 kubenswrapper[31559]: I0216 02:22:35.595354 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:35.595415 master-0 kubenswrapper[31559]: I0216 02:22:35.595409 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca\") pod 
\"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:35.595657 master-0 kubenswrapper[31559]: I0216 02:22:35.595602 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" Feb 16 02:22:35.595735 master-0 kubenswrapper[31559]: I0216 02:22:35.595680 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:35.595735 master-0 kubenswrapper[31559]: I0216 02:22:35.595724 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:22:35.595883 master-0 kubenswrapper[31559]: I0216 02:22:35.595847 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" Feb 16 02:22:35.595955 master-0 kubenswrapper[31559]: I0216 02:22:35.595928 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx" Feb 16 02:22:35.596017 master-0 kubenswrapper[31559]: I0216 02:22:35.595987 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:35.596114 master-0 kubenswrapper[31559]: I0216 02:22:35.596079 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:22:35.606335 master-0 kubenswrapper[31559]: I0216 02:22:35.606249 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 02:22:35.636178 master-0 kubenswrapper[31559]: I0216 02:22:35.636083 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 16 02:22:35.645758 master-0 kubenswrapper[31559]: I0216 02:22:35.645707 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 02:22:35.655679 master-0 kubenswrapper[31559]: I0216 02:22:35.655637 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-images\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: 
\"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:22:35.665941 master-0 kubenswrapper[31559]: I0216 02:22:35.665888 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 16 02:22:35.674713 master-0 kubenswrapper[31559]: I0216 02:22:35.674658 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:35.685528 master-0 kubenswrapper[31559]: I0216 02:22:35.685473 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 02:22:35.693836 master-0 kubenswrapper[31559]: I0216 02:22:35.693762 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-images\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:35.698152 master-0 kubenswrapper[31559]: I0216 02:22:35.698082 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:35.698424 master-0 kubenswrapper[31559]: I0216 02:22:35.698383 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:35.698571 master-0 kubenswrapper[31559]: I0216 02:22:35.698531 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:35.698679 master-0 kubenswrapper[31559]: I0216 02:22:35.698639 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:35.698737 master-0 kubenswrapper[31559]: I0216 02:22:35.698641 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0abea413-e08a-465a-8ec4-2be650bfd5bd-serving-cert\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:35.698781 master-0 kubenswrapper[31559]: I0216 02:22:35.698715 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:35.698824 master-0 
kubenswrapper[31559]: I0216 02:22:35.698793 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:35.698824 master-0 kubenswrapper[31559]: I0216 02:22:35.698815 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:22:35.698935 master-0 kubenswrapper[31559]: I0216 02:22:35.698876 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:35.698988 master-0 kubenswrapper[31559]: I0216 02:22:35.698969 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:35.699038 master-0 kubenswrapper[31559]: I0216 02:22:35.699007 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:35.699080 master-0 kubenswrapper[31559]: I0216 02:22:35.699061 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:35.699123 master-0 kubenswrapper[31559]: I0216 02:22:35.699091 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:35.699312 master-0 kubenswrapper[31559]: I0216 02:22:35.699274 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:35.699363 master-0 kubenswrapper[31559]: I0216 02:22:35.699334 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: 
\"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:22:35.699363 master-0 kubenswrapper[31559]: I0216 02:22:35.699345 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:35.699363 master-0 kubenswrapper[31559]: I0216 02:22:35.699358 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:35.699496 master-0 kubenswrapper[31559]: I0216 02:22:35.699383 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/27a42eb0-677c-414d-b0ec-f945ec39b7e9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:35.699496 master-0 kubenswrapper[31559]: I0216 02:22:35.699484 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:35.699591 master-0 kubenswrapper[31559]: I0216 02:22:35.699551 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:22:35.699591 master-0 kubenswrapper[31559]: I0216 02:22:35.699579 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:35.699666 master-0 kubenswrapper[31559]: I0216 02:22:35.699634 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:35.699714 master-0 kubenswrapper[31559]: I0216 02:22:35.699662 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:35.699757 master-0 kubenswrapper[31559]: I0216 02:22:35.699740 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48863ff6-63ac-42d7-bac7-29d888c92db9-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: 
\"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:22:35.699805 master-0 kubenswrapper[31559]: I0216 02:22:35.699755 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d870332c-2498-4135-a9b3-a71e67c2805b-proxy-tls\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:35.699805 master-0 kubenswrapper[31559]: I0216 02:22:35.699788 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:35.699882 master-0 kubenswrapper[31559]: I0216 02:22:35.699834 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:35.699882 master-0 kubenswrapper[31559]: I0216 02:22:35.699855 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" Feb 16 02:22:35.699882 master-0 kubenswrapper[31559]: I0216 02:22:35.699876 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:35.700085 master-0 kubenswrapper[31559]: I0216 02:22:35.699917 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:22:35.700085 master-0 kubenswrapper[31559]: I0216 02:22:35.699940 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:35.700085 master-0 kubenswrapper[31559]: I0216 02:22:35.699966 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:35.700211 master-0 kubenswrapper[31559]: I0216 02:22:35.699990 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0abea413-e08a-465a-8ec4-2be650bfd5bd-trusted-ca-bundle\") pod 
\"insights-operator-cb4f7b4cf-llpf5\" (UID: \"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:35.700211 master-0 kubenswrapper[31559]: I0216 02:22:35.700120 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a77e2f8f-d164-4a58-aab2-f3444c05cacb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:22:35.700311 master-0 kubenswrapper[31559]: I0216 02:22:35.700284 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:35.700675 master-0 kubenswrapper[31559]: I0216 02:22:35.700613 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:35.706381 master-0 kubenswrapper[31559]: I0216 02:22:35.706332 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 02:22:35.709958 master-0 kubenswrapper[31559]: I0216 02:22:35.709908 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d870332c-2498-4135-a9b3-a71e67c2805b-auth-proxy-config\") pod 
\"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:35.716049 master-0 kubenswrapper[31559]: I0216 02:22:35.716013 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97b8261a-91e3-435e-93f8-0a17f30359fd-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:22:35.716263 master-0 kubenswrapper[31559]: I0216 02:22:35.716237 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbc8f91-f8cc-48d8-917c-64fa978069de-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" Feb 16 02:22:35.725863 master-0 kubenswrapper[31559]: I0216 02:22:35.725812 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 02:22:35.746547 master-0 kubenswrapper[31559]: I0216 02:22:35.746480 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 02:22:35.766311 master-0 kubenswrapper[31559]: I0216 02:22:35.766240 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 16 02:22:35.785659 master-0 kubenswrapper[31559]: I0216 02:22:35.785614 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 02:22:35.790065 master-0 kubenswrapper[31559]: I0216 02:22:35.790008 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-apiservice-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:35.795553 master-0 kubenswrapper[31559]: I0216 02:22:35.795505 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc3354cb-b6c3-40a5-a695-cccb079ad292-webhook-cert\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:35.807129 master-0 kubenswrapper[31559]: I0216 02:22:35.807066 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-tmpwz" Feb 16 02:22:35.826355 master-0 kubenswrapper[31559]: I0216 02:22:35.826265 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mhjgf" Feb 16 02:22:35.846532 master-0 kubenswrapper[31559]: I0216 02:22:35.846462 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-w8x86" Feb 16 02:22:35.865711 master-0 kubenswrapper[31559]: I0216 02:22:35.865662 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-whkzn" Feb 16 02:22:35.885715 master-0 kubenswrapper[31559]: I0216 02:22:35.885616 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 02:22:35.894170 master-0 kubenswrapper[31559]: I0216 02:22:35.894103 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" Feb 16 02:22:35.905529 master-0 kubenswrapper[31559]: I0216 02:22:35.905463 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-tw6fq" Feb 16 02:22:35.926492 master-0 kubenswrapper[31559]: I0216 02:22:35.926305 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 02:22:35.946309 master-0 kubenswrapper[31559]: I0216 02:22:35.946231 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kvrqk" Feb 16 02:22:35.964401 master-0 kubenswrapper[31559]: I0216 02:22:35.964290 31559 request.go:700] Waited for 2.003623594s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0 Feb 16 02:22:35.966387 master-0 kubenswrapper[31559]: I0216 02:22:35.966343 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 02:22:35.975900 master-0 kubenswrapper[31559]: I0216 02:22:35.975823 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c442d349-668b-4d01-a097-5981b7a04eac-machine-approver-tls\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:22:35.986136 master-0 kubenswrapper[31559]: I0216 02:22:35.986081 31559 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 02:22:35.995689 master-0 kubenswrapper[31559]: I0216 02:22:35.995596 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-auth-proxy-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:22:36.005885 master-0 kubenswrapper[31559]: I0216 02:22:36.005817 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 02:22:36.015337 master-0 kubenswrapper[31559]: I0216 02:22:36.015267 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c442d349-668b-4d01-a097-5981b7a04eac-config\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:22:36.027167 master-0 kubenswrapper[31559]: I0216 02:22:36.027101 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 02:22:36.046797 master-0 kubenswrapper[31559]: I0216 02:22:36.046687 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 02:22:36.051097 master-0 kubenswrapper[31559]: I0216 02:22:36.051017 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbc8f91-f8cc-48d8-917c-64fa978069de-proxy-tls\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" Feb 16 02:22:36.066050 master-0 
kubenswrapper[31559]: I0216 02:22:36.065962 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-bh67v" Feb 16 02:22:36.086013 master-0 kubenswrapper[31559]: I0216 02:22:36.085861 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 02:22:36.094737 master-0 kubenswrapper[31559]: I0216 02:22:36.094631 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97b8261a-91e3-435e-93f8-0a17f30359fd-proxy-tls\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:22:36.105703 master-0 kubenswrapper[31559]: I0216 02:22:36.105633 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-xrwft" Feb 16 02:22:36.126483 master-0 kubenswrapper[31559]: I0216 02:22:36.126402 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-ccbvw" Feb 16 02:22:36.145673 master-0 kubenswrapper[31559]: I0216 02:22:36.145605 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 02:22:36.156625 master-0 kubenswrapper[31559]: I0216 02:22:36.156561 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 
02:22:36.167118 master-0 kubenswrapper[31559]: I0216 02:22:36.167049 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 02:22:36.169842 master-0 kubenswrapper[31559]: I0216 02:22:36.169776 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4a146b2-c712-408a-97d8-5de3a84f3aaf-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:22:36.187035 master-0 kubenswrapper[31559]: I0216 02:22:36.186887 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 02:22:36.191009 master-0 kubenswrapper[31559]: I0216 02:22:36.190911 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4a146b2-c712-408a-97d8-5de3a84f3aaf-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:22:36.205678 master-0 kubenswrapper[31559]: I0216 02:22:36.205630 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 02:22:36.226649 master-0 kubenswrapper[31559]: I0216 02:22:36.226575 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:22:36.246689 master-0 kubenswrapper[31559]: I0216 02:22:36.246614 31559 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 02:22:36.256959 master-0 kubenswrapper[31559]: I0216 02:22:36.256803 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-node-bootstrap-token\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:22:36.266477 master-0 kubenswrapper[31559]: I0216 02:22:36.266371 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmfsl" Feb 16 02:22:36.287205 master-0 kubenswrapper[31559]: I0216 02:22:36.287134 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nfl29" Feb 16 02:22:36.307470 master-0 kubenswrapper[31559]: I0216 02:22:36.307352 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 02:22:36.313784 master-0 kubenswrapper[31559]: I0216 02:22:36.313734 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8bbd369-4219-48ef-ae2d-b45c81789403-certs\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" Feb 16 02:22:36.327398 master-0 kubenswrapper[31559]: I0216 02:22:36.327341 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 02:22:36.335273 master-0 kubenswrapper[31559]: I0216 02:22:36.335214 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:36.346998 master-0 kubenswrapper[31559]: I0216 02:22:36.346929 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 02:22:36.350078 master-0 kubenswrapper[31559]: I0216 02:22:36.350010 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:36.356135 master-0 kubenswrapper[31559]: I0216 02:22:36.356074 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:36.356506 master-0 kubenswrapper[31559]: I0216 02:22:36.356457 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/32d420d6-bbda-42c0-82fe-8b187ad91607-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" Feb 16 02:22:36.356705 master-0 kubenswrapper[31559]: I0216 02:22:36.356644 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0a900f93-91c9-4782-89a3-1cc09f3aec95-metrics-client-ca\") pod 
\"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:36.367520 master-0 kubenswrapper[31559]: I0216 02:22:36.367471 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 02:22:36.370966 master-0 kubenswrapper[31559]: I0216 02:22:36.370904 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:36.386984 master-0 kubenswrapper[31559]: I0216 02:22:36.386912 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 02:22:36.390098 master-0 kubenswrapper[31559]: I0216 02:22:36.390039 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:36.406982 master-0 kubenswrapper[31559]: I0216 02:22:36.406889 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-r9rvr" Feb 16 02:22:36.426687 master-0 kubenswrapper[31559]: I0216 02:22:36.426588 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 02:22:36.436664 master-0 kubenswrapper[31559]: I0216 02:22:36.436590 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0a900f93-91c9-4782-89a3-1cc09f3aec95-node-exporter-tls\") pod \"node-exporter-jxbq6\" (UID: \"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:36.446767 master-0 kubenswrapper[31559]: I0216 02:22:36.446627 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-bxb2h" Feb 16 02:22:36.466939 master-0 kubenswrapper[31559]: I0216 02:22:36.466862 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 02:22:36.470786 master-0 kubenswrapper[31559]: I0216 02:22:36.470743 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:36.486746 master-0 kubenswrapper[31559]: I0216 02:22:36.486689 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9dqnm" Feb 16 02:22:36.506099 master-0 kubenswrapper[31559]: I0216 02:22:36.506063 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 02:22:36.516201 master-0 kubenswrapper[31559]: I0216 02:22:36.516152 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" Feb 16 02:22:36.526826 master-0 kubenswrapper[31559]: I0216 
02:22:36.526775 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 02:22:36.533869 master-0 kubenswrapper[31559]: I0216 02:22:36.533811 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/32d420d6-bbda-42c0-82fe-8b187ad91607-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" Feb 16 02:22:36.546012 master-0 kubenswrapper[31559]: I0216 02:22:36.545932 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 02:22:36.550106 master-0 kubenswrapper[31559]: I0216 02:22:36.550068 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:36.566101 master-0 kubenswrapper[31559]: I0216 02:22:36.566049 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 02:22:36.570251 master-0 kubenswrapper[31559]: I0216 02:22:36.570163 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/86af980a-2653-40c3-a368-a795d7fb8558-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " 
pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:36.586152 master-0 kubenswrapper[31559]: I0216 02:22:36.586076 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2md94v7udfjth" Feb 16 02:22:36.590057 master-0 kubenswrapper[31559]: I0216 02:22:36.590017 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:36.594148 master-0 kubenswrapper[31559]: E0216 02:22:36.594102 31559 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:36.594148 master-0 kubenswrapper[31559]: E0216 02:22:36.594110 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:36.594342 master-0 kubenswrapper[31559]: E0216 02:22:36.594165 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:37.594148668 +0000 UTC m=+9.938754683 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:36.594342 master-0 kubenswrapper[31559]: E0216 02:22:36.594207 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:37.594181589 +0000 UTC m=+9.938787634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:36.595403 master-0 kubenswrapper[31559]: E0216 02:22:36.595333 31559 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:36.595570 master-0 kubenswrapper[31559]: E0216 02:22:36.595501 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:37.595473772 +0000 UTC m=+9.940079817 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 02:22:36.596422 master-0 kubenswrapper[31559]: E0216 02:22:36.596368 31559 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:36.596577 master-0 kubenswrapper[31559]: E0216 02:22:36.596460 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs podName:8c267cc7-a51a-4b14-baee-e584254eefc5 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:37.596446777 +0000 UTC m=+9.941052782 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs") pod "metrics-server-67b79bd656-cs2n2" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:36.596757 master-0 kubenswrapper[31559]: E0216 02:22:36.596671 31559 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:36.596859 master-0 kubenswrapper[31559]: E0216 02:22:36.596836 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert podName:7846b339-c46d-4983-b586-a28f2868f665 nodeName:}" failed. No retries permitted until 2026-02-16 02:22:37.596803767 +0000 UTC m=+9.941409822 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert") pod "ingress-canary-6t7mx" (UID: "7846b339-c46d-4983-b586-a28f2868f665") : failed to sync secret cache: timed out waiting for the condition Feb 16 02:22:36.605905 master-0 kubenswrapper[31559]: I0216 02:22:36.605853 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-wsv7k" Feb 16 02:22:36.626368 master-0 kubenswrapper[31559]: I0216 02:22:36.626281 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 02:22:36.646619 master-0 kubenswrapper[31559]: I0216 02:22:36.646557 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 02:22:36.666874 master-0 kubenswrapper[31559]: I0216 02:22:36.666766 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 02:22:36.686187 master-0 kubenswrapper[31559]: I0216 02:22:36.686091 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 02:22:36.706348 master-0 kubenswrapper[31559]: I0216 02:22:36.706235 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zs9t" Feb 16 02:22:36.725421 master-0 kubenswrapper[31559]: I0216 02:22:36.725360 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 02:22:36.746168 master-0 kubenswrapper[31559]: I0216 02:22:36.746079 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 02:22:36.766022 master-0 kubenswrapper[31559]: I0216 02:22:36.765947 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 02:22:36.819244 master-0 kubenswrapper[31559]: E0216 02:22:36.819135 31559 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.842s" Feb 16 02:22:36.819244 master-0 kubenswrapper[31559]: I0216 02:22:36.819246 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 02:22:36.819843 master-0 kubenswrapper[31559]: I0216 02:22:36.819277 31559 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="05ee7f0d-b929-461e-8464-8e1aa0635e08" Feb 16 02:22:36.835043 master-0 kubenswrapper[31559]: I0216 02:22:36.834965 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 02:22:36.853839 master-0 kubenswrapper[31559]: I0216 02:22:36.853575 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr872\" (UniqueName: \"kubernetes.io/projected/4a5b01c1-1231-4e69-8b6c-c4981b65b26e-kube-api-access-zr872\") pod \"kube-storage-version-migrator-operator-cd5474998-x2sh4\" (UID: \"4a5b01c1-1231-4e69-8b6c-c4981b65b26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-x2sh4" Feb 16 02:22:36.869586 master-0 kubenswrapper[31559]: I0216 02:22:36.869498 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc9jt\" (UniqueName: \"kubernetes.io/projected/75915935-00a2-44ce-99d1-03e2492044d4-kube-api-access-pc9jt\") pod \"network-check-source-7d8f4c8c66-kcnkd\" (UID: \"75915935-00a2-44ce-99d1-03e2492044d4\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-kcnkd" Feb 16 02:22:36.888276 master-0 kubenswrapper[31559]: I0216 02:22:36.888198 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lvf8t\" (UniqueName: \"kubernetes.io/projected/21686a6d-f685-4fb6-98af-3e8a39c5981b-kube-api-access-lvf8t\") pod \"cluster-monitoring-operator-756d64c8c4-5q4zs\" (UID: \"21686a6d-f685-4fb6-98af-3e8a39c5981b\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-5q4zs" Feb 16 02:22:36.908060 master-0 kubenswrapper[31559]: I0216 02:22:36.907891 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7llx6\" (UniqueName: \"kubernetes.io/projected/6c02961f-30ec-4405-b7fa-9c4192342ae9-kube-api-access-7llx6\") pod \"openshift-controller-manager-operator-5f5f84757d-b47jp\" (UID: \"6c02961f-30ec-4405-b7fa-9c4192342ae9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-b47jp" Feb 16 02:22:36.929379 master-0 kubenswrapper[31559]: I0216 02:22:36.929327 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmjjn\" (UniqueName: \"kubernetes.io/projected/d870332c-2498-4135-a9b3-a71e67c2805b-kube-api-access-wmjjn\") pod \"machine-config-operator-84976bb859-5gs6g\" (UID: \"d870332c-2498-4135-a9b3-a71e67c2805b\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-5gs6g" Feb 16 02:22:36.946367 master-0 kubenswrapper[31559]: I0216 02:22:36.946296 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9sgx\" (UniqueName: \"kubernetes.io/projected/2ffa4db8-97da-42de-8e51-35680f518ca7-kube-api-access-t9sgx\") pod \"dns-operator-86b8869b79-4rfwq\" (UID: \"2ffa4db8-97da-42de-8e51-35680f518ca7\") " pod="openshift-dns-operator/dns-operator-86b8869b79-4rfwq" Feb 16 02:22:36.972636 master-0 kubenswrapper[31559]: I0216 02:22:36.971485 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8f33151-61df-4b66-ba85-9ba210779059-kube-api-access\") pod 
\"kube-controller-manager-operator-78ff47c7c5-dgxhp\" (UID: \"a8f33151-61df-4b66-ba85-9ba210779059\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-dgxhp" Feb 16 02:22:36.982324 master-0 kubenswrapper[31559]: I0216 02:22:36.982240 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94gg\" (UniqueName: \"kubernetes.io/projected/9defdfff-eb18-4beb-9591-918d0e4b4236-kube-api-access-r94gg\") pod \"service-ca-676cd8b9b5-x6nhn\" (UID: \"9defdfff-eb18-4beb-9591-918d0e4b4236\") " pod="openshift-service-ca/service-ca-676cd8b9b5-x6nhn" Feb 16 02:22:36.983693 master-0 kubenswrapper[31559]: I0216 02:22:36.983615 31559 request.go:700] Waited for 2.895734761s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/openshift-state-metrics/token Feb 16 02:22:37.007305 master-0 kubenswrapper[31559]: I0216 02:22:37.007210 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mp4\" (UniqueName: \"kubernetes.io/projected/32d420d6-bbda-42c0-82fe-8b187ad91607-kube-api-access-w4mp4\") pod \"openshift-state-metrics-546cc7d765-2zl2r\" (UID: \"32d420d6-bbda-42c0-82fe-8b187ad91607\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-2zl2r" Feb 16 02:22:37.021446 master-0 kubenswrapper[31559]: I0216 02:22:37.021390 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p8rc\" (UniqueName: \"kubernetes.io/projected/c4a146b2-c712-408a-97d8-5de3a84f3aaf-kube-api-access-6p8rc\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj\" (UID: \"c4a146b2-c712-408a-97d8-5de3a84f3aaf\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-nhmdj" Feb 16 02:22:37.041632 master-0 kubenswrapper[31559]: I0216 02:22:37.041579 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t7fmj\" (UniqueName: \"kubernetes.io/projected/b2a83ddd-ffa5-4127-9099-91187ad9dbba-kube-api-access-t7fmj\") pod \"cluster-node-tuning-operator-ff6c9b66-845gn\" (UID: \"b2a83ddd-ffa5-4127-9099-91187ad9dbba\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-845gn" Feb 16 02:22:37.069749 master-0 kubenswrapper[31559]: I0216 02:22:37.069695 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhz2m\" (UniqueName: \"kubernetes.io/projected/91938be6-9ae4-4849-abe8-fc842daecd23-kube-api-access-bhz2m\") pod \"service-ca-operator-5dc4688546-ck5nr\" (UID: \"91938be6-9ae4-4849-abe8-fc842daecd23\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-ck5nr" Feb 16 02:22:37.080776 master-0 kubenswrapper[31559]: I0216 02:22:37.080724 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6v2\" (UniqueName: \"kubernetes.io/projected/857357a1-dc98-4dd5-98b3-c94b1ddf9dec-kube-api-access-lj6v2\") pod \"catalogd-controller-manager-67bc7c997f-zc2br\" (UID: \"857357a1-dc98-4dd5-98b3-c94b1ddf9dec\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:37.107775 master-0 kubenswrapper[31559]: I0216 02:22:37.107676 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzpb\" (UniqueName: \"kubernetes.io/projected/89041b37-18f6-499d-89ec-a0523a25dc58-kube-api-access-zvzpb\") pod \"redhat-operators-9c6g5\" (UID: \"89041b37-18f6-499d-89ec-a0523a25dc58\") " pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:37.129587 master-0 kubenswrapper[31559]: I0216 02:22:37.129544 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sctj8\" (UniqueName: \"kubernetes.io/projected/0a900f93-91c9-4782-89a3-1cc09f3aec95-kube-api-access-sctj8\") pod \"node-exporter-jxbq6\" (UID: 
\"0a900f93-91c9-4782-89a3-1cc09f3aec95\") " pod="openshift-monitoring/node-exporter-jxbq6" Feb 16 02:22:37.150027 master-0 kubenswrapper[31559]: I0216 02:22:37.149970 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcq6v\" (UniqueName: \"kubernetes.io/projected/e379cfaf-3a4c-40e7-8641-3524b3669295-kube-api-access-gcq6v\") pod \"openshift-apiserver-operator-6d4655d9cf-v7lmz\" (UID: \"e379cfaf-3a4c-40e7-8641-3524b3669295\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-v7lmz" Feb 16 02:22:37.171197 master-0 kubenswrapper[31559]: I0216 02:22:37.171135 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm44l\" (UniqueName: \"kubernetes.io/projected/dc3354cb-b6c3-40a5-a695-cccb079ad292-kube-api-access-hm44l\") pod \"packageserver-87777c9b7-fxzh6\" (UID: \"dc3354cb-b6c3-40a5-a695-cccb079ad292\") " pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:37.189903 master-0 kubenswrapper[31559]: I0216 02:22:37.189849 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf7tw\" (UniqueName: \"kubernetes.io/projected/27d876a7-6a48-4942-ad96-ed8ed3aa104b-kube-api-access-kf7tw\") pod \"operator-controller-controller-manager-85c9b89969-g9lcm\" (UID: \"27d876a7-6a48-4942-ad96-ed8ed3aa104b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:22:37.209888 master-0 kubenswrapper[31559]: I0216 02:22:37.209852 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cnfs\" (UniqueName: \"kubernetes.io/projected/7846b339-c46d-4983-b586-a28f2868f665-kube-api-access-5cnfs\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx" Feb 16 02:22:37.228837 master-0 kubenswrapper[31559]: I0216 02:22:37.228715 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l4djm\" (UniqueName: \"kubernetes.io/projected/27a42eb0-677c-414d-b0ec-f945ec39b7e9-kube-api-access-l4djm\") pod \"cluster-baremetal-operator-7bc947fc7d-frvgm\" (UID: \"27a42eb0-677c-414d-b0ec-f945ec39b7e9\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-frvgm" Feb 16 02:22:37.251336 master-0 kubenswrapper[31559]: I0216 02:22:37.251298 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88kmw\" (UniqueName: \"kubernetes.io/projected/fec84b8a-a0d1-4b07-8827-cef0beb89ecd-kube-api-access-88kmw\") pod \"machine-api-operator-bd7dd5c46-qw2zq\" (UID: \"fec84b8a-a0d1-4b07-8827-cef0beb89ecd\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-qw2zq" Feb 16 02:22:37.269651 master-0 kubenswrapper[31559]: I0216 02:22:37.269560 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vchs\" (UniqueName: \"kubernetes.io/projected/c442d349-668b-4d01-a097-5981b7a04eac-kube-api-access-4vchs\") pod \"machine-approver-8569dd85ff-vqtcl\" (UID: \"c442d349-668b-4d01-a097-5981b7a04eac\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-vqtcl" Feb 16 02:22:37.290602 master-0 kubenswrapper[31559]: I0216 02:22:37.290550 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9snq8\" (UniqueName: \"kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.309674 master-0 kubenswrapper[31559]: I0216 02:22:37.309639 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxlnm\" (UniqueName: \"kubernetes.io/projected/0abea413-e08a-465a-8ec4-2be650bfd5bd-kube-api-access-bxlnm\") pod \"insights-operator-cb4f7b4cf-llpf5\" (UID: 
\"0abea413-e08a-465a-8ec4-2be650bfd5bd\") " pod="openshift-insights/insights-operator-cb4f7b4cf-llpf5" Feb 16 02:22:37.328406 master-0 kubenswrapper[31559]: I0216 02:22:37.328334 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsbs\" (UniqueName: \"kubernetes.io/projected/6dcef814-353e-4985-9afc-9e545f7853ae-kube-api-access-pjsbs\") pod \"ovnkube-node-bs85n\" (UID: \"6dcef814-353e-4985-9afc-9e545f7853ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:37.349683 master-0 kubenswrapper[31559]: I0216 02:22:37.349619 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad700b17-ba2a-41d4-8bec-538a009a613b-kube-api-access\") pod \"cluster-version-operator-649c4f5445-lzvc4\" (UID: \"ad700b17-ba2a-41d4-8bec-538a009a613b\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-lzvc4" Feb 16 02:22:37.370313 master-0 kubenswrapper[31559]: I0216 02:22:37.370238 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85sdg\" (UniqueName: \"kubernetes.io/projected/17390d9a-148d-4927-a831-5bc4873c43d5-kube-api-access-85sdg\") pod \"router-default-864ddd5f56-ffptx\" (UID: \"17390d9a-148d-4927-a831-5bc4873c43d5\") " pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:37.392782 master-0 kubenswrapper[31559]: I0216 02:22:37.392738 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7r8\" (UniqueName: \"kubernetes.io/projected/456e6c3a-c16c-470b-a0cd-bb79865b54f0-kube-api-access-nl7r8\") pod \"network-operator-6fcf4c966-dctqr\" (UID: \"456e6c3a-c16c-470b-a0cd-bb79865b54f0\") " pod="openshift-network-operator/network-operator-6fcf4c966-dctqr" Feb 16 02:22:37.409222 master-0 kubenswrapper[31559]: I0216 02:22:37.409180 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8fj\" (UniqueName: 
\"kubernetes.io/projected/76915cba-7c11-4bd8-9943-81de74e7781b-kube-api-access-6f8fj\") pod \"catalog-operator-588944557d-2z8fq\" (UID: \"76915cba-7c11-4bd8-9943-81de74e7781b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:22:37.430080 master-0 kubenswrapper[31559]: I0216 02:22:37.429996 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ns9l\" (UniqueName: \"kubernetes.io/projected/f91346c7-bde4-4fa2-ac27-b5f0d25eeb75-kube-api-access-4ns9l\") pod \"multus-additional-cni-plugins-mvdkf\" (UID: \"f91346c7-bde4-4fa2-ac27-b5f0d25eeb75\") " pod="openshift-multus/multus-additional-cni-plugins-mvdkf" Feb 16 02:22:37.452978 master-0 kubenswrapper[31559]: I0216 02:22:37.452736 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lvq\" (UniqueName: \"kubernetes.io/projected/467d92a2-1cf3-418d-b41e-8e5f9d7a5b74-kube-api-access-f8lvq\") pod \"olm-operator-6b56bd877c-qwp9g\" (UID: \"467d92a2-1cf3-418d-b41e-8e5f9d7a5b74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:22:37.468380 master-0 kubenswrapper[31559]: I0216 02:22:37.468302 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cq8\" (UniqueName: \"kubernetes.io/projected/f7317f91-9441-449f-9738-85da088cf94f-kube-api-access-58cq8\") pod \"ovnkube-control-plane-bb7ffbb8d-mcff9\" (UID: \"f7317f91-9441-449f-9738-85da088cf94f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-mcff9" Feb 16 02:22:37.489119 master-0 kubenswrapper[31559]: I0216 02:22:37.488969 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvmr\" (UniqueName: \"kubernetes.io/projected/d8bbd369-4219-48ef-ae2d-b45c81789403-kube-api-access-grvmr\") pod \"machine-config-server-5zv6j\" (UID: \"d8bbd369-4219-48ef-ae2d-b45c81789403\") " pod="openshift-machine-config-operator/machine-config-server-5zv6j" 
Feb 16 02:22:37.507070 master-0 kubenswrapper[31559]: I0216 02:22:37.507018 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bnwz\" (UniqueName: \"kubernetes.io/projected/0fbc8f91-f8cc-48d8-917c-64fa978069de-kube-api-access-5bnwz\") pod \"machine-config-daemon-qd4l7\" (UID: \"0fbc8f91-f8cc-48d8-917c-64fa978069de\") " pod="openshift-machine-config-operator/machine-config-daemon-qd4l7" Feb 16 02:22:37.529374 master-0 kubenswrapper[31559]: I0216 02:22:37.529285 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdllq\" (UniqueName: \"kubernetes.io/projected/430c146b-ceaf-411a-add6-ce949243aabf-kube-api-access-vdllq\") pod \"multus-8jgrl\" (UID: \"430c146b-ceaf-411a-add6-ce949243aabf\") " pod="openshift-multus/multus-8jgrl" Feb 16 02:22:37.549519 master-0 kubenswrapper[31559]: I0216 02:22:37.549467 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlnkb\" (UniqueName: \"kubernetes.io/projected/dbc5b101-936f-4bf3-bbf3-f30966b0ab50-kube-api-access-jlnkb\") pod \"network-node-identity-kffmg\" (UID: \"dbc5b101-936f-4bf3-bbf3-f30966b0ab50\") " pod="openshift-network-node-identity/network-node-identity-kffmg" Feb 16 02:22:37.558061 master-0 kubenswrapper[31559]: I0216 02:22:37.557940 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxrl\" (UniqueName: \"kubernetes.io/projected/a77e2f8f-d164-4a58-aab2-f3444c05cacb-kube-api-access-bsxrl\") pod \"cluster-storage-operator-75b869db96-qm7rm\" (UID: \"a77e2f8f-d164-4a58-aab2-f3444c05cacb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qm7rm" Feb 16 02:22:37.589483 master-0 kubenswrapper[31559]: I0216 02:22:37.589338 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49hf\" (UniqueName: \"kubernetes.io/projected/5f810ea0-e32d-4097-beca-5194349a57a6-kube-api-access-p49hf\") pod 
\"community-operators-s95k9\" (UID: \"5f810ea0-e32d-4097-beca-5194349a57a6\") " pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:22:37.610035 master-0 kubenswrapper[31559]: I0216 02:22:37.609936 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8x2\" (UniqueName: \"kubernetes.io/projected/30fef0d5-46ea-4fa3-9ffa-88187d010ffe-kube-api-access-xj8x2\") pod \"cloud-credential-operator-595c8f9ff-n8xmg\" (UID: \"30fef0d5-46ea-4fa3-9ffa-88187d010ffe\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-n8xmg" Feb 16 02:22:37.631708 master-0 kubenswrapper[31559]: I0216 02:22:37.631559 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-bound-sa-token\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:22:37.646648 master-0 kubenswrapper[31559]: I0216 02:22:37.646577 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.646770 master-0 kubenswrapper[31559]: I0216 02:22:37.646652 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.646770 master-0 kubenswrapper[31559]: I0216 02:22:37.646717 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.646770 master-0 kubenswrapper[31559]: I0216 02:22:37.646763 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.646923 master-0 kubenswrapper[31559]: I0216 02:22:37.646805 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx" Feb 16 02:22:37.647609 master-0 kubenswrapper[31559]: I0216 02:22:37.647552 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.647791 master-0 kubenswrapper[31559]: I0216 02:22:37.647697 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: 
\"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.647862 master-0 kubenswrapper[31559]: I0216 02:22:37.647822 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7846b339-c46d-4983-b586-a28f2868f665-cert\") pod \"ingress-canary-6t7mx\" (UID: \"7846b339-c46d-4983-b586-a28f2868f665\") " pod="openshift-ingress-canary/ingress-canary-6t7mx" Feb 16 02:22:37.647936 master-0 kubenswrapper[31559]: I0216 02:22:37.647902 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.648259 master-0 kubenswrapper[31559]: I0216 02:22:37.648209 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") pod \"metrics-server-67b79bd656-cs2n2\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:22:37.657717 master-0 kubenswrapper[31559]: I0216 02:22:37.657675 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24b6h\" (UniqueName: \"kubernetes.io/projected/bde83629-b39c-401e-bc30-5ce205638918-kube-api-access-24b6h\") pod \"marketplace-operator-6cc5b65c6b-8nl7s\" (UID: \"bde83629-b39c-401e-bc30-5ce205638918\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:22:37.658111 master-0 kubenswrapper[31559]: I0216 02:22:37.658069 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqrb\" (UniqueName: 
\"kubernetes.io/projected/97b8261a-91e3-435e-93f8-0a17f30359fd-kube-api-access-pmqrb\") pod \"machine-config-controller-686c884b4d-zljgp\" (UID: \"97b8261a-91e3-435e-93f8-0a17f30359fd\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-zljgp" Feb 16 02:22:37.680476 master-0 kubenswrapper[31559]: I0216 02:22:37.680351 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gmn\" (UniqueName: \"kubernetes.io/projected/23755f7f-dce6-4dcf-9664-22e3aedb5c81-kube-api-access-n4gmn\") pod \"package-server-manager-5c696dbdcd-tkqng\" (UID: \"23755f7f-dce6-4dcf-9664-22e3aedb5c81\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:22:37.680903 master-0 kubenswrapper[31559]: I0216 02:22:37.680846 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:37.680903 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:37.680903 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:37.680903 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:37.681228 master-0 kubenswrapper[31559]: I0216 02:22:37.680910 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:37.708510 master-0 kubenswrapper[31559]: I0216 02:22:37.708429 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9p9r\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-kube-api-access-s9p9r\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: 
\"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:22:37.716781 master-0 kubenswrapper[31559]: I0216 02:22:37.716743 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cstlg\" (UniqueName: \"kubernetes.io/projected/5b923d74-bad3-4780-8e7e-e8365ac9ea06-kube-api-access-cstlg\") pod \"certified-operators-gkbtj\" (UID: \"5b923d74-bad3-4780-8e7e-e8365ac9ea06\") " pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:22:37.738921 master-0 kubenswrapper[31559]: I0216 02:22:37.738882 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgj82\" (UniqueName: \"kubernetes.io/projected/48863ff6-63ac-42d7-bac7-29d888c92db9-kube-api-access-kgj82\") pod \"cluster-autoscaler-operator-67fd9768b5-9rvcj\" (UID: \"48863ff6-63ac-42d7-bac7-29d888c92db9\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-9rvcj" Feb 16 02:22:37.758635 master-0 kubenswrapper[31559]: I0216 02:22:37.758505 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8g4\" (UniqueName: \"kubernetes.io/projected/5b62004d-7fe3-47ae-8e26-8496befb047c-kube-api-access-ln8g4\") pod \"cluster-samples-operator-f8cbff74c-k8jz5\" (UID: \"5b62004d-7fe3-47ae-8e26-8496befb047c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-k8jz5" Feb 16 02:22:37.787716 master-0 kubenswrapper[31559]: I0216 02:22:37.787657 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kws4h\" (UniqueName: \"kubernetes.io/projected/695d1f01-d3c1-4fb9-9dda-daf33eae11f5-kube-api-access-kws4h\") pod \"prometheus-operator-7485d645b8-v9mmd\" (UID: \"695d1f01-d3c1-4fb9-9dda-daf33eae11f5\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-v9mmd" Feb 16 02:22:37.807625 master-0 kubenswrapper[31559]: I0216 02:22:37.807552 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9n58j\" (UniqueName: \"kubernetes.io/projected/22739961-e322-47f1-b232-eaa4cc35319c-kube-api-access-9n58j\") pod \"apiserver-6796f86fd6-qtxkl\" (UID: \"22739961-e322-47f1-b232-eaa4cc35319c\") " pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:37.827726 master-0 kubenswrapper[31559]: I0216 02:22:37.827654 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7gn\" (UniqueName: \"kubernetes.io/projected/c9cd32bc-a13a-44ee-ba52-7bb335c7007b-kube-api-access-xr7gn\") pod \"authentication-operator-755d954778-bngv9\" (UID: \"c9cd32bc-a13a-44ee-ba52-7bb335c7007b\") " pod="openshift-authentication-operator/authentication-operator-755d954778-bngv9" Feb 16 02:22:37.851557 master-0 kubenswrapper[31559]: I0216 02:22:37.851479 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j9vb\" (UniqueName: \"kubernetes.io/projected/1a07cd28-a33d-4abd-9198-ba82bacd51ba-kube-api-access-5j9vb\") pod \"node-resolver-7tjn9\" (UID: \"1a07cd28-a33d-4abd-9198-ba82bacd51ba\") " pod="openshift-dns/node-resolver-7tjn9" Feb 16 02:22:37.869451 master-0 kubenswrapper[31559]: I0216 02:22:37.869370 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rc6w\" (UniqueName: \"kubernetes.io/projected/04804a08-e3a5-46f3-abcb-967866834baa-kube-api-access-8rc6w\") pod \"ingress-operator-c588d8cb4-nbjz6\" (UID: \"04804a08-e3a5-46f3-abcb-967866834baa\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-nbjz6" Feb 16 02:22:37.889178 master-0 kubenswrapper[31559]: I0216 02:22:37.889069 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxnht\" (UniqueName: \"kubernetes.io/projected/8f918d5b-1a4c-4b56-98a4-5cef638bb615-kube-api-access-fxnht\") pod \"apiserver-578b9bc556-8g98v\" (UID: \"8f918d5b-1a4c-4b56-98a4-5cef638bb615\") " 
pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:37.915537 master-0 kubenswrapper[31559]: I0216 02:22:37.915422 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d49c\" (UniqueName: \"kubernetes.io/projected/1f2d2601-481d-4e86-ac4c-3d34d5691261-kube-api-access-8d49c\") pod \"cluster-olm-operator-55b69c6c48-jshtp\" (UID: \"1f2d2601-481d-4e86-ac4c-3d34d5691261\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-jshtp" Feb 16 02:22:37.929359 master-0 kubenswrapper[31559]: I0216 02:22:37.929213 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bff42\" (UniqueName: \"kubernetes.io/projected/724ac845-3835-458b-9645-e665be135ff9-kube-api-access-bff42\") pod \"etcd-operator-67bf55ccdd-htjgz\" (UID: \"724ac845-3835-458b-9645-e665be135ff9\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-htjgz" Feb 16 02:22:37.948179 master-0 kubenswrapper[31559]: I0216 02:22:37.948087 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1743372f-bdb0-4558-b47b-3714f3aa3fde-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-mmhcs\" (UID: \"1743372f-bdb0-4558-b47b-3714f3aa3fde\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-mmhcs" Feb 16 02:22:37.968210 master-0 kubenswrapper[31559]: I0216 02:22:37.968145 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47lht\" (UniqueName: \"kubernetes.io/projected/2a67f799-fd8d-4bee-9d67-720151c1650b-kube-api-access-47lht\") pod \"iptables-alerter-9bnql\" (UID: \"2a67f799-fd8d-4bee-9d67-720151c1650b\") " pod="openshift-network-operator/iptables-alerter-9bnql" Feb 16 02:22:37.981236 master-0 kubenswrapper[31559]: I0216 02:22:37.981143 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfgxq\" (UniqueName: 
\"kubernetes.io/projected/e478bdcc-052e-42f8-91b6-58c26cfc9cfc-kube-api-access-pfgxq\") pod \"network-check-target-hswdj\" (UID: \"e478bdcc-052e-42f8-91b6-58c26cfc9cfc\") " pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:22:37.984227 master-0 kubenswrapper[31559]: I0216 02:22:37.984165 31559 request.go:700] Waited for 3.885156551s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Feb 16 02:22:38.007373 master-0 kubenswrapper[31559]: I0216 02:22:38.007315 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0540a70-a256-422b-a827-e564d0e67866-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-bxgpd\" (UID: \"a0540a70-a256-422b-a827-e564d0e67866\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-bxgpd" Feb 16 02:22:38.029091 master-0 kubenswrapper[31559]: I0216 02:22:38.028923 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nmjz\" (UniqueName: \"kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz\") pod \"route-controller-manager-998bd8b4b-hm5k2\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") " pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:38.043056 master-0 kubenswrapper[31559]: I0216 02:22:38.042975 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kj7r\" (UniqueName: \"kubernetes.io/projected/00ef3b03-55dc-4661-b7fd-1e586c45b5de-kube-api-access-7kj7r\") pod \"tuned-vvw25\" (UID: \"00ef3b03-55dc-4661-b7fd-1e586c45b5de\") " pod="openshift-cluster-node-tuning-operator/tuned-vvw25" Feb 16 02:22:38.071156 master-0 kubenswrapper[31559]: I0216 02:22:38.071103 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w7276\" (UniqueName: \"kubernetes.io/projected/a3065737-c7c0-4fbb-b484-f2a9204d4908-kube-api-access-w7276\") pod \"csi-snapshot-controller-74b6595c6d-466x9\" (UID: \"a3065737-c7c0-4fbb-b484-f2a9204d4908\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-466x9" Feb 16 02:22:38.088779 master-0 kubenswrapper[31559]: I0216 02:22:38.088700 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbmnx\" (UniqueName: \"kubernetes.io/projected/a8d00a01-aa48-4830-a558-93a31cb98b31-kube-api-access-lbmnx\") pod \"control-plane-machine-set-operator-d8bf84b88-r5l9f\" (UID: \"a8d00a01-aa48-4830-a558-93a31cb98b31\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-r5l9f" Feb 16 02:22:38.107774 master-0 kubenswrapper[31559]: I0216 02:22:38.107733 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2qvg\" (UniqueName: \"kubernetes.io/projected/9be9fd24-fdb1-43dc-80b8-68020427bfd7-kube-api-access-k2qvg\") pod \"openshift-config-operator-7c6bdb986f-zlbd2\" (UID: \"9be9fd24-fdb1-43dc-80b8-68020427bfd7\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:22:38.133276 master-0 kubenswrapper[31559]: I0216 02:22:38.133169 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m4sb\" (UniqueName: \"kubernetes.io/projected/7f0f9b7d-e663-4927-861b-a9544d483b6e-kube-api-access-5m4sb\") pod \"network-metrics-daemon-gn9mv\" (UID: \"7f0f9b7d-e663-4927-861b-a9544d483b6e\") " pod="openshift-multus/network-metrics-daemon-gn9mv" Feb 16 02:22:38.152751 master-0 kubenswrapper[31559]: I0216 02:22:38.152694 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgqtf\" (UniqueName: \"kubernetes.io/projected/86af980a-2653-40c3-a368-a795d7fb8558-kube-api-access-tgqtf\") pod 
\"kube-state-metrics-7cc9598d54-2gx8b\" (UID: \"86af980a-2653-40c3-a368-a795d7fb8558\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-2gx8b" Feb 16 02:22:38.165281 master-0 kubenswrapper[31559]: I0216 02:22:38.165205 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxq28\" (UniqueName: \"kubernetes.io/projected/1487f82c-c14a-4f65-be77-5af2612f56f4-kube-api-access-wxq28\") pod \"redhat-marketplace-thm6w\" (UID: \"1487f82c-c14a-4f65-be77-5af2612f56f4\") " pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:22:38.187926 master-0 kubenswrapper[31559]: I0216 02:22:38.187861 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980aa005-f51d-4ca2-aee6-a6fdeefd86d0-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-dsjz2\" (UID: \"980aa005-f51d-4ca2-aee6-a6fdeefd86d0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-dsjz2" Feb 16 02:22:38.210137 master-0 kubenswrapper[31559]: I0216 02:22:38.210056 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz49l\" (UniqueName: \"kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l\") pod \"multus-admission-controller-6d678b8d67-8gzlx\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" Feb 16 02:22:38.228358 master-0 kubenswrapper[31559]: I0216 02:22:38.228282 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh76\" (UniqueName: \"kubernetes.io/projected/676adb95-3ffd-43e5-89e3-9d7a7d74df28-kube-api-access-lnh76\") pod \"migrator-5bd989df77-sh2wj\" (UID: \"676adb95-3ffd-43e5-89e3-9d7a7d74df28\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-sh2wj" Feb 16 02:22:38.250223 master-0 kubenswrapper[31559]: I0216 02:22:38.250148 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4p8p\" (UniqueName: \"kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p\") pod \"controller-manager-5788fc6459-29m25\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") " pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:38.269859 master-0 kubenswrapper[31559]: I0216 02:22:38.269785 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqmhs\" (UniqueName: \"kubernetes.io/projected/7ac81030-35d1-4d86-844d-65d1156d8944-kube-api-access-lqmhs\") pod \"dns-default-njlg6\" (UID: \"7ac81030-35d1-4d86-844d-65d1156d8944\") " pod="openshift-dns/dns-default-njlg6" Feb 16 02:22:38.289821 master-0 kubenswrapper[31559]: I0216 02:22:38.289685 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2582m\" (UniqueName: \"kubernetes.io/projected/d008dbd4-e713-4f2e-b64d-ca9cfc83a502-kube-api-access-2582m\") pod \"csi-snapshot-controller-operator-7b87b97578-8n9v4\" (UID: \"d008dbd4-e713-4f2e-b64d-ca9cfc83a502\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-8n9v4" Feb 16 02:22:38.305470 master-0 kubenswrapper[31559]: E0216 02:22:38.305353 31559 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:38.305470 master-0 kubenswrapper[31559]: E0216 02:22:38.305428 31559 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:38.305664 master-0 kubenswrapper[31559]: E0216 02:22:38.305633 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access 
podName:20bf60f7-9e36-477e-96a5-4fc8dc1bca5e nodeName:}" failed. No retries permitted until 2026-02-16 02:22:38.805597408 +0000 UTC m=+11.150203463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access") pod "installer-3-master-0" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:38.315489 master-0 kubenswrapper[31559]: E0216 02:22:38.315427 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:22:38.334987 master-0 kubenswrapper[31559]: E0216 02:22:38.334925 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:38.406723 master-0 kubenswrapper[31559]: E0216 02:22:38.406660 31559 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.587s" Feb 16 02:22:38.420590 master-0 kubenswrapper[31559]: I0216 02:22:38.420538 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 02:22:38.427424 master-0 kubenswrapper[31559]: I0216 02:22:38.427395 31559 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 02:22:38.427545 master-0 kubenswrapper[31559]: I0216 02:22:38.427530 31559 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 02:22:38.467476 master-0 kubenswrapper[31559]: I0216 02:22:38.467398 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:38.467684 master-0 kubenswrapper[31559]: I0216 02:22:38.467506 31559 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:38.467684 master-0 kubenswrapper[31559]: I0216 02:22:38.467583 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:22:38.467684 master-0 kubenswrapper[31559]: I0216 02:22:38.467602 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 02:22:38.467797 master-0 kubenswrapper[31559]: I0216 02:22:38.467729 31559 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="05ee7f0d-b929-461e-8464-8e1aa0635e08" Feb 16 02:22:38.467845 master-0 kubenswrapper[31559]: I0216 02:22:38.467824 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-ffptx" Feb 16 02:22:38.468596 master-0 kubenswrapper[31559]: I0216 02:22:38.467892 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:38.468674 master-0 kubenswrapper[31559]: I0216 02:22:38.468633 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:38.468747 master-0 kubenswrapper[31559]: I0216 02:22:38.468727 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:22:38.468797 master-0 kubenswrapper[31559]: I0216 02:22:38.468771 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:38.469731 master-0 kubenswrapper[31559]: I0216 02:22:38.469694 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:38.469781 master-0 kubenswrapper[31559]: I0216 02:22:38.469772 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:22:38.469958 master-0 kubenswrapper[31559]: I0216 02:22:38.469925 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:22:38.470057 master-0 kubenswrapper[31559]: I0216 02:22:38.470028 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:22:38.470234 master-0 kubenswrapper[31559]: I0216 02:22:38.470203 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:22:38.470909 master-0 kubenswrapper[31559]: I0216 02:22:38.470881 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:38.470977 master-0 kubenswrapper[31559]: I0216 02:22:38.470936 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:22:38.471017 master-0 kubenswrapper[31559]: I0216 02:22:38.470976 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:22:38.471070 master-0 kubenswrapper[31559]: I0216 02:22:38.471013 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:22:38.471070 master-0 kubenswrapper[31559]: I0216 02:22:38.471043 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-hswdj" Feb 16 02:22:38.471144 master-0 kubenswrapper[31559]: I0216 02:22:38.471071 31559 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-zlbd2" Feb 16 02:22:38.471144 master-0 kubenswrapper[31559]: I0216 02:22:38.471091 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:38.471144 master-0 kubenswrapper[31559]: I0216 02:22:38.471126 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:38.471244 master-0 kubenswrapper[31559]: I0216 02:22:38.471163 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:38.471244 master-0 kubenswrapper[31559]: I0216 02:22:38.471191 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:22:38.471244 master-0 kubenswrapper[31559]: I0216 02:22:38.471231 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:38.471544 master-0 kubenswrapper[31559]: I0216 02:22:38.471516 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-zc2br" Feb 16 02:22:38.471618 master-0 kubenswrapper[31559]: I0216 02:22:38.471606 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-njlg6" Feb 16 02:22:38.471664 master-0 kubenswrapper[31559]: I0216 02:22:38.471643 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-njlg6" Feb 16 02:22:38.471706 master-0 kubenswrapper[31559]: I0216 02:22:38.471669 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:38.471706 master-0 kubenswrapper[31559]: I0216 02:22:38.471695 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:22:38.474366 master-0 kubenswrapper[31559]: I0216 02:22:38.474323 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:22:38.544300 master-0 kubenswrapper[31559]: I0216 02:22:38.544146 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:22:38.545726 master-0 kubenswrapper[31559]: I0216 02:22:38.545672 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:38.545726 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:38.545726 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:38.545726 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:38.545993 master-0 kubenswrapper[31559]: I0216 02:22:38.545784 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:38.780046 master-0 kubenswrapper[31559]: I0216 02:22:38.779989 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 02:22:38.799959 master-0 kubenswrapper[31559]: I0216 02:22:38.799857 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 02:22:38.868380 master-0 
kubenswrapper[31559]: I0216 02:22:38.868286 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:38.868679 master-0 kubenswrapper[31559]: E0216 02:22:38.868609 31559 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:38.868679 master-0 kubenswrapper[31559]: E0216 02:22:38.868644 31559 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:38.868814 master-0 kubenswrapper[31559]: E0216 02:22:38.868724 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access podName:20bf60f7-9e36-477e-96a5-4fc8dc1bca5e nodeName:}" failed. No retries permitted until 2026-02-16 02:22:39.868701102 +0000 UTC m=+12.213307147 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access") pod "installer-3-master-0" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:39.351740 master-0 kubenswrapper[31559]: I0216 02:22:39.351568 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:22:39.357417 master-0 kubenswrapper[31559]: I0216 02:22:39.357354 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-9dx2k" Feb 16 02:22:39.408192 master-0 kubenswrapper[31559]: I0216 02:22:39.408132 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 16 02:22:39.543412 master-0 kubenswrapper[31559]: I0216 02:22:39.543299 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:39.543412 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:39.543412 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:39.543412 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:39.543412 master-0 kubenswrapper[31559]: I0216 02:22:39.543384 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:39.892836 master-0 kubenswrapper[31559]: I0216 02:22:39.892766 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:39.893090 master-0 kubenswrapper[31559]: E0216 02:22:39.892942 31559 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:39.893090 master-0 kubenswrapper[31559]: E0216 02:22:39.892964 31559 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:39.893090 master-0 kubenswrapper[31559]: E0216 02:22:39.893013 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access podName:20bf60f7-9e36-477e-96a5-4fc8dc1bca5e nodeName:}" failed. No retries permitted until 2026-02-16 02:22:41.89299798 +0000 UTC m=+14.237604005 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access") pod "installer-3-master-0" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:40.117258 master-0 kubenswrapper[31559]: I0216 02:22:40.117187 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:22:40.125276 master-0 kubenswrapper[31559]: I0216 02:22:40.125210 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-qwp9g" Feb 16 02:22:40.191729 master-0 kubenswrapper[31559]: I0216 02:22:40.191186 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.191156639 podStartE2EDuration="7.191156639s" podCreationTimestamp="2026-02-16 02:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:22:40.187282878 +0000 UTC m=+12.531888953" watchObservedRunningTime="2026-02-16 02:22:40.191156639 +0000 UTC m=+12.535762694" Feb 16 02:22:40.542424 master-0 kubenswrapper[31559]: I0216 02:22:40.542241 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:40.545337 master-0 kubenswrapper[31559]: I0216 02:22:40.545220 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:40.545337 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:40.545337 master-0 
kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:40.545337 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:40.545337 master-0 kubenswrapper[31559]: I0216 02:22:40.545291 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:40.604137 master-0 kubenswrapper[31559]: I0216 02:22:40.604071 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9c6g5" Feb 16 02:22:40.843201 master-0 kubenswrapper[31559]: I0216 02:22:40.842965 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:22:40.847601 master-0 kubenswrapper[31559]: I0216 02:22:40.847206 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" Feb 16 02:22:41.154465 master-0 kubenswrapper[31559]: I0216 02:22:41.154319 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:41.176032 master-0 kubenswrapper[31559]: I0216 02:22:41.175907 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:41.410668 master-0 kubenswrapper[31559]: I0216 02:22:41.410567 31559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 02:22:41.410668 master-0 kubenswrapper[31559]: I0216 02:22:41.410614 31559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 02:22:41.465827 master-0 kubenswrapper[31559]: I0216 02:22:41.465766 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:22:41.471526 master-0 kubenswrapper[31559]: I0216 02:22:41.471487 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-tkqng" Feb 16 02:22:41.544508 master-0 kubenswrapper[31559]: I0216 02:22:41.544043 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:41.544508 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:41.544508 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:41.544508 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:41.544508 master-0 kubenswrapper[31559]: I0216 02:22:41.544131 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:41.932597 master-0 kubenswrapper[31559]: I0216 02:22:41.932504 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:41.932895 master-0 kubenswrapper[31559]: E0216 02:22:41.932840 31559 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:41.932991 master-0 kubenswrapper[31559]: E0216 02:22:41.932910 31559 projected.go:194] Error preparing data for projected volume 
kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:41.933066 master-0 kubenswrapper[31559]: E0216 02:22:41.932998 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access podName:20bf60f7-9e36-477e-96a5-4fc8dc1bca5e nodeName:}" failed. No retries permitted until 2026-02-16 02:22:45.932964709 +0000 UTC m=+18.277570764 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access") pod "installer-3-master-0" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:42.270175 master-0 kubenswrapper[31559]: I0216 02:22:42.269953 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:22:42.271315 master-0 kubenswrapper[31559]: I0216 02:22:42.271260 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-g9lcm" Feb 16 02:22:42.544608 master-0 kubenswrapper[31559]: I0216 02:22:42.544470 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:42.544608 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:42.544608 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:42.544608 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:42.544608 master-0 kubenswrapper[31559]: I0216 02:22:42.544538 31559 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:42.850893 master-0 kubenswrapper[31559]: I0216 02:22:42.850755 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6796f86fd6-qtxkl" Feb 16 02:22:43.151075 master-0 kubenswrapper[31559]: I0216 02:22:43.150939 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-578b9bc556-8g98v" Feb 16 02:22:43.491007 master-0 kubenswrapper[31559]: I0216 02:22:43.490964 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:43.491260 master-0 kubenswrapper[31559]: I0216 02:22:43.491171 31559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 02:22:43.491260 master-0 kubenswrapper[31559]: I0216 02:22:43.491185 31559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 02:22:43.524184 master-0 kubenswrapper[31559]: I0216 02:22:43.524133 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:43.547158 master-0 kubenswrapper[31559]: I0216 02:22:43.547095 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:43.547158 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:43.547158 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:43.547158 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:43.547696 master-0 kubenswrapper[31559]: I0216 02:22:43.547167 31559 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:43.727884 master-0 kubenswrapper[31559]: I0216 02:22:43.727812 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:43.756453 master-0 kubenswrapper[31559]: I0216 02:22:43.755774 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bs85n" Feb 16 02:22:44.544239 master-0 kubenswrapper[31559]: I0216 02:22:44.544171 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:44.544239 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:44.544239 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:44.544239 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:44.544539 master-0 kubenswrapper[31559]: I0216 02:22:44.544256 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:45.544555 master-0 kubenswrapper[31559]: I0216 02:22:45.544489 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:45.544555 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:45.544555 master-0 
kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:45.544555 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:45.545226 master-0 kubenswrapper[31559]: I0216 02:22:45.544573 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:46.030550 master-0 kubenswrapper[31559]: I0216 02:22:46.030411 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:22:46.031136 master-0 kubenswrapper[31559]: E0216 02:22:46.030665 31559 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:46.031136 master-0 kubenswrapper[31559]: E0216 02:22:46.030717 31559 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:46.031136 master-0 kubenswrapper[31559]: E0216 02:22:46.030834 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access podName:20bf60f7-9e36-477e-96a5-4fc8dc1bca5e nodeName:}" failed. No retries permitted until 2026-02-16 02:22:54.030809505 +0000 UTC m=+26.375415530 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access") pod "installer-3-master-0" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 02:22:46.429033 master-0 kubenswrapper[31559]: I0216 02:22:46.428947 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:46.434957 master-0 kubenswrapper[31559]: I0216 02:22:46.434890 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-87777c9b7-fxzh6" Feb 16 02:22:46.545761 master-0 kubenswrapper[31559]: I0216 02:22:46.545675 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:46.545761 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:46.545761 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:46.545761 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:46.546474 master-0 kubenswrapper[31559]: I0216 02:22:46.545761 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:47.295458 master-0 kubenswrapper[31559]: I0216 02:22:47.295341 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:22:47.301380 master-0 kubenswrapper[31559]: I0216 02:22:47.301310 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-2z8fq" Feb 16 02:22:47.543621 master-0 kubenswrapper[31559]: I0216 02:22:47.543550 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:47.543621 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:47.543621 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:47.543621 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:47.543970 master-0 kubenswrapper[31559]: I0216 02:22:47.543633 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:47.938081 master-0 kubenswrapper[31559]: I0216 02:22:47.938008 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gkbtj" Feb 16 02:22:47.938919 master-0 kubenswrapper[31559]: I0216 02:22:47.938158 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s95k9" Feb 16 02:22:48.541502 master-0 kubenswrapper[31559]: I0216 02:22:48.541406 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-thm6w" Feb 16 02:22:48.544573 master-0 kubenswrapper[31559]: I0216 02:22:48.544529 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:48.544573 master-0 kubenswrapper[31559]: [-]has-synced 
failed: reason withheld Feb 16 02:22:48.544573 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:48.544573 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:48.544797 master-0 kubenswrapper[31559]: I0216 02:22:48.544610 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:49.544995 master-0 kubenswrapper[31559]: I0216 02:22:49.544898 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:49.544995 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:49.544995 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:49.544995 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:49.546612 master-0 kubenswrapper[31559]: I0216 02:22:49.544998 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:50.151630 master-0 kubenswrapper[31559]: I0216 02:22:50.151494 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:22:50.544754 master-0 kubenswrapper[31559]: I0216 02:22:50.544661 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:50.544754 master-0 kubenswrapper[31559]: [-]has-synced 
failed: reason withheld Feb 16 02:22:50.544754 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:50.544754 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:50.545779 master-0 kubenswrapper[31559]: I0216 02:22:50.544774 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:51.544827 master-0 kubenswrapper[31559]: I0216 02:22:51.544696 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:51.544827 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:51.544827 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:51.544827 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:51.546094 master-0 kubenswrapper[31559]: I0216 02:22:51.544871 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 02:22:52.545229 master-0 kubenswrapper[31559]: I0216 02:22:52.545156 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 02:22:52.545229 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld Feb 16 02:22:52.545229 master-0 kubenswrapper[31559]: [+]process-running ok Feb 16 02:22:52.545229 master-0 kubenswrapper[31559]: healthz check failed Feb 16 02:22:52.546236 
master-0 kubenswrapper[31559]: I0216 02:22:52.545240 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:53.543765 master-0 kubenswrapper[31559]: I0216 02:22:53.543669 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:53.543765 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld
Feb 16 02:22:53.543765 master-0 kubenswrapper[31559]: [+]process-running ok
Feb 16 02:22:53.543765 master-0 kubenswrapper[31559]: healthz check failed
Feb 16 02:22:53.543765 master-0 kubenswrapper[31559]: I0216 02:22:53.543748 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:54.080055 master-0 kubenswrapper[31559]: I0216 02:22:54.079953 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:22:54.080952 master-0 kubenswrapper[31559]: E0216 02:22:54.080493 31559 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 02:22:54.080952 master-0 kubenswrapper[31559]: E0216 02:22:54.080536 31559 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 02:22:54.080952 master-0 kubenswrapper[31559]: E0216 02:22:54.080645 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access podName:20bf60f7-9e36-477e-96a5-4fc8dc1bca5e nodeName:}" failed. No retries permitted until 2026-02-16 02:23:10.080607643 +0000 UTC m=+42.425213698 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access") pod "installer-3-master-0" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 02:22:54.544718 master-0 kubenswrapper[31559]: I0216 02:22:54.544622 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:54.544718 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld
Feb 16 02:22:54.544718 master-0 kubenswrapper[31559]: [+]process-running ok
Feb 16 02:22:54.544718 master-0 kubenswrapper[31559]: healthz check failed
Feb 16 02:22:54.545196 master-0 kubenswrapper[31559]: I0216 02:22:54.544718 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:55.545328 master-0 kubenswrapper[31559]: I0216 02:22:55.545258 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:55.545328 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld
Feb 16 02:22:55.545328 master-0 kubenswrapper[31559]: [+]process-running ok
Feb 16 02:22:55.545328 master-0 kubenswrapper[31559]: healthz check failed
Feb 16 02:22:55.545328 master-0 kubenswrapper[31559]: I0216 02:22:55.545332 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:55.878664 master-0 kubenswrapper[31559]: I0216 02:22:55.878507 31559 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 16 02:22:55.879087 master-0 kubenswrapper[31559]: I0216 02:22:55.878840 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ebf941eaba3a97825b1c8002f4b27a20" containerName="startup-monitor" containerID="cri-o://c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa" gracePeriod=5
Feb 16 02:22:56.099412 master-0 kubenswrapper[31559]: I0216 02:22:56.099313 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:22:56.545330 master-0 kubenswrapper[31559]: I0216 02:22:56.545266 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:56.545330 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld
Feb 16 02:22:56.545330 master-0 kubenswrapper[31559]: [+]process-running ok
Feb 16 02:22:56.545330 master-0 kubenswrapper[31559]: healthz check failed
Feb 16 02:22:56.545943 master-0 kubenswrapper[31559]: I0216 02:22:56.545342 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:57.092635 master-0 kubenswrapper[31559]: I0216 02:22:57.092557 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:22:57.544521 master-0 kubenswrapper[31559]: I0216 02:22:57.544456 31559 patch_prober.go:28] interesting pod/router-default-864ddd5f56-ffptx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 02:22:57.544521 master-0 kubenswrapper[31559]: [-]has-synced failed: reason withheld
Feb 16 02:22:57.544521 master-0 kubenswrapper[31559]: [+]process-running ok
Feb 16 02:22:57.544521 master-0 kubenswrapper[31559]: healthz check failed
Feb 16 02:22:57.544864 master-0 kubenswrapper[31559]: I0216 02:22:57.544537 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-ffptx" podUID="17390d9a-148d-4927-a831-5bc4873c43d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 02:22:58.549069 master-0 kubenswrapper[31559]: I0216 02:22:58.549036 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:22:58.553041 master-0 kubenswrapper[31559]: I0216 02:22:58.553026 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-864ddd5f56-ffptx"
Feb 16 02:23:01.485508 master-0 kubenswrapper[31559]: I0216 02:23:01.485416 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebf941eaba3a97825b1c8002f4b27a20/startup-monitor/0.log"
Feb 16 02:23:01.485508 master-0 kubenswrapper[31559]: I0216 02:23:01.485524 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:23:01.588772 master-0 kubenswrapper[31559]: I0216 02:23:01.588684 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebf941eaba3a97825b1c8002f4b27a20/startup-monitor/0.log"
Feb 16 02:23:01.589168 master-0 kubenswrapper[31559]: I0216 02:23:01.588782 31559 generic.go:334] "Generic (PLEG): container finished" podID="ebf941eaba3a97825b1c8002f4b27a20" containerID="c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa" exitCode=137
Feb 16 02:23:01.589168 master-0 kubenswrapper[31559]: I0216 02:23:01.588866 31559 scope.go:117] "RemoveContainer" containerID="c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa"
Feb 16 02:23:01.589168 master-0 kubenswrapper[31559]: I0216 02:23:01.588871 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:23:01.599013 master-0 kubenswrapper[31559]: I0216 02:23:01.598900 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") "
Feb 16 02:23:01.599013 master-0 kubenswrapper[31559]: I0216 02:23:01.598993 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") "
Feb 16 02:23:01.599194 master-0 kubenswrapper[31559]: I0216 02:23:01.599117 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") "
Feb 16 02:23:01.599250 master-0 kubenswrapper[31559]: I0216 02:23:01.599213 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") "
Feb 16 02:23:01.599299 master-0 kubenswrapper[31559]: I0216 02:23:01.599282 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") pod \"ebf941eaba3a97825b1c8002f4b27a20\" (UID: \"ebf941eaba3a97825b1c8002f4b27a20\") "
Feb 16 02:23:01.599352 master-0 kubenswrapper[31559]: I0216 02:23:01.599289 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:23:01.599352 master-0 kubenswrapper[31559]: I0216 02:23:01.599348 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log" (OuterVolumeSpecName: "var-log") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:23:01.599467 master-0 kubenswrapper[31559]: I0216 02:23:01.599377 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:23:01.599528 master-0 kubenswrapper[31559]: I0216 02:23:01.599470 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests" (OuterVolumeSpecName: "manifests") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:23:01.599981 master-0 kubenswrapper[31559]: I0216 02:23:01.599924 31559 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-log\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:01.599981 master-0 kubenswrapper[31559]: I0216 02:23:01.599973 31559 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:01.600101 master-0 kubenswrapper[31559]: I0216 02:23:01.599996 31559 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-manifests\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:01.600101 master-0 kubenswrapper[31559]: I0216 02:23:01.600015 31559 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:01.605478 master-0 kubenswrapper[31559]: I0216 02:23:01.605389 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ebf941eaba3a97825b1c8002f4b27a20" (UID: "ebf941eaba3a97825b1c8002f4b27a20"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:23:01.617462 master-0 kubenswrapper[31559]: I0216 02:23:01.617324 31559 scope.go:117] "RemoveContainer" containerID="c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa"
Feb 16 02:23:01.619805 master-0 kubenswrapper[31559]: E0216 02:23:01.619489 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa\": container with ID starting with c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa not found: ID does not exist" containerID="c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa"
Feb 16 02:23:01.619805 master-0 kubenswrapper[31559]: I0216 02:23:01.619595 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa"} err="failed to get container status \"c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa\": rpc error: code = NotFound desc = could not find container \"c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa\": container with ID starting with c2d62a7e7f4bbc8330cc4b08997f3b933fe2a87b2e1d2971c8efe2da06bf71fa not found: ID does not exist"
Feb 16 02:23:01.702115 master-0 kubenswrapper[31559]: I0216 02:23:01.702004 31559 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf941eaba3a97825b1c8002f4b27a20-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:01.942184 master-0 kubenswrapper[31559]: I0216 02:23:01.941912 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf941eaba3a97825b1c8002f4b27a20" path="/var/lib/kubelet/pods/ebf941eaba3a97825b1c8002f4b27a20/volumes"
Feb 16 02:23:02.801082 master-0 kubenswrapper[31559]: I0216 02:23:02.801029 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f488d596f-4xjjh"]
Feb 16 02:23:02.802191 master-0 kubenswrapper[31559]: E0216 02:23:02.802141 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" containerName="installer"
Feb 16 02:23:02.802494 master-0 kubenswrapper[31559]: I0216 02:23:02.802478 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" containerName="installer"
Feb 16 02:23:02.802599 master-0 kubenswrapper[31559]: E0216 02:23:02.802585 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:23:02.802686 master-0 kubenswrapper[31559]: I0216 02:23:02.802674 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:23:02.802768 master-0 kubenswrapper[31559]: E0216 02:23:02.802754 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4733c2df-0f5a-4696-b8c6-2568ebc7debc" containerName="installer"
Feb 16 02:23:02.802849 master-0 kubenswrapper[31559]: I0216 02:23:02.802837 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="4733c2df-0f5a-4696-b8c6-2568ebc7debc" containerName="installer"
Feb 16 02:23:02.802956 master-0 kubenswrapper[31559]: E0216 02:23:02.802943 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" containerName="installer"
Feb 16 02:23:02.803060 master-0 kubenswrapper[31559]: I0216 02:23:02.803042 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" containerName="installer"
Feb 16 02:23:02.803165 master-0 kubenswrapper[31559]: E0216 02:23:02.803151 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b79cd7f-675e-4778-be06-95e79b1c008a" containerName="installer"
Feb 16 02:23:02.803249 master-0 kubenswrapper[31559]: I0216 02:23:02.803234 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b79cd7f-675e-4778-be06-95e79b1c008a" containerName="installer"
Feb 16 02:23:02.803361 master-0 kubenswrapper[31559]: E0216 02:23:02.803344 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" containerName="installer"
Feb 16 02:23:02.803477 master-0 kubenswrapper[31559]: I0216 02:23:02.803460 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" containerName="installer"
Feb 16 02:23:02.803619 master-0 kubenswrapper[31559]: E0216 02:23:02.803599 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" containerName="installer"
Feb 16 02:23:02.803726 master-0 kubenswrapper[31559]: I0216 02:23:02.803708 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" containerName="installer"
Feb 16 02:23:02.803838 master-0 kubenswrapper[31559]: E0216 02:23:02.803820 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8269ffdd-7357-4a8c-b578-0f482558f93e" containerName="collect-profiles"
Feb 16 02:23:02.803957 master-0 kubenswrapper[31559]: I0216 02:23:02.803939 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="8269ffdd-7357-4a8c-b578-0f482558f93e" containerName="collect-profiles"
Feb 16 02:23:02.804080 master-0 kubenswrapper[31559]: E0216 02:23:02.804063 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c399bab-ff5e-4fd0-959b-354508c39eec" containerName="installer"
Feb 16 02:23:02.804182 master-0 kubenswrapper[31559]: I0216 02:23:02.804165 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c399bab-ff5e-4fd0-959b-354508c39eec" containerName="installer"
Feb 16 02:23:02.804315 master-0 kubenswrapper[31559]: E0216 02:23:02.804298 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf941eaba3a97825b1c8002f4b27a20" containerName="startup-monitor"
Feb 16 02:23:02.804424 master-0 kubenswrapper[31559]: I0216 02:23:02.804409 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf941eaba3a97825b1c8002f4b27a20" containerName="startup-monitor"
Feb 16 02:23:02.804562 master-0 kubenswrapper[31559]: E0216 02:23:02.804544 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f43f07-ce08-4c21-9463-ea983a110244" containerName="installer"
Feb 16 02:23:02.804691 master-0 kubenswrapper[31559]: I0216 02:23:02.804672 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f43f07-ce08-4c21-9463-ea983a110244" containerName="installer"
Feb 16 02:23:02.804831 master-0 kubenswrapper[31559]: E0216 02:23:02.804810 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:23:02.804943 master-0 kubenswrapper[31559]: I0216 02:23:02.804925 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:23:02.805062 master-0 kubenswrapper[31559]: E0216 02:23:02.805044 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9063971f-d258-4c4b-9e12-06b7de390d3b" containerName="installer"
Feb 16 02:23:02.805167 master-0 kubenswrapper[31559]: I0216 02:23:02.805149 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9063971f-d258-4c4b-9e12-06b7de390d3b" containerName="installer"
Feb 16 02:23:02.805493 master-0 kubenswrapper[31559]: I0216 02:23:02.805469 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff0fbd8-9ecc-421f-952c-c90ea17ddc7b" containerName="installer"
Feb 16 02:23:02.805641 master-0 kubenswrapper[31559]: I0216 02:23:02.805621 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb79c4e-3171-4d3c-a0d1-ed1a93acafa8" containerName="assisted-installer-controller"
Feb 16 02:23:02.805751 master-0 kubenswrapper[31559]: I0216 02:23:02.805732 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9063971f-d258-4c4b-9e12-06b7de390d3b" containerName="installer"
Feb 16 02:23:02.805884 master-0 kubenswrapper[31559]: I0216 02:23:02.805868 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="8269ffdd-7357-4a8c-b578-0f482558f93e" containerName="collect-profiles"
Feb 16 02:23:02.805986 master-0 kubenswrapper[31559]: I0216 02:23:02.805970 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c399bab-ff5e-4fd0-959b-354508c39eec" containerName="installer"
Feb 16 02:23:02.806085 master-0 kubenswrapper[31559]: I0216 02:23:02.806071 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b79cd7f-675e-4778-be06-95e79b1c008a" containerName="installer"
Feb 16 02:23:02.806163 master-0 kubenswrapper[31559]: I0216 02:23:02.806150 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler"
Feb 16 02:23:02.806240 master-0 kubenswrapper[31559]: I0216 02:23:02.806228 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" containerName="installer"
Feb 16 02:23:02.806318 master-0 kubenswrapper[31559]: I0216 02:23:02.806306 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf941eaba3a97825b1c8002f4b27a20" containerName="startup-monitor"
Feb 16 02:23:02.806408 master-0 kubenswrapper[31559]: I0216 02:23:02.806395 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="4733c2df-0f5a-4696-b8c6-2568ebc7debc" containerName="installer"
Feb 16 02:23:02.806510 master-0 kubenswrapper[31559]: I0216 02:23:02.806497 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f35c7c9-16ec-486e-99ff-f1cbcce76eb3" containerName="installer"
Feb 16 02:23:02.806602 master-0 kubenswrapper[31559]: I0216 02:23:02.806589 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea4c28c-8f53-4b41-9c85-c8c50599d7cd" containerName="installer"
Feb 16 02:23:02.806676 master-0 kubenswrapper[31559]: I0216 02:23:02.806663 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f43f07-ce08-4c21-9463-ea983a110244" containerName="installer"
Feb 16 02:23:02.807313 master-0 kubenswrapper[31559]: I0216 02:23:02.807294 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.811872 master-0 kubenswrapper[31559]: I0216 02:23:02.811820 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 02:23:02.812043 master-0 kubenswrapper[31559]: I0216 02:23:02.811866 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 02:23:02.812043 master-0 kubenswrapper[31559]: I0216 02:23:02.811901 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 02:23:02.812043 master-0 kubenswrapper[31559]: I0216 02:23:02.811988 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 02:23:02.813250 master-0 kubenswrapper[31559]: I0216 02:23:02.813170 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 02:23:02.814421 master-0 kubenswrapper[31559]: I0216 02:23:02.814388 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 02:23:02.814598 master-0 kubenswrapper[31559]: I0216 02:23:02.814549 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 02:23:02.814891 master-0 kubenswrapper[31559]: I0216 02:23:02.814857 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 02:23:02.815952 master-0 kubenswrapper[31559]: I0216 02:23:02.815913 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 02:23:02.816144 master-0 kubenswrapper[31559]: I0216 02:23:02.816117 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 02:23:02.816257 master-0 kubenswrapper[31559]: I0216 02:23:02.816222 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 02:23:02.819521 master-0 kubenswrapper[31559]: I0216 02:23:02.818994 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-xg7sl"
Feb 16 02:23:02.824682 master-0 kubenswrapper[31559]: I0216 02:23:02.824621 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 02:23:02.833253 master-0 kubenswrapper[31559]: I0216 02:23:02.833210 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 02:23:02.838657 master-0 kubenswrapper[31559]: I0216 02:23:02.838597 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f488d596f-4xjjh"]
Feb 16 02:23:02.916720 master-0 kubenswrapper[31559]: I0216 02:23:02.916664 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.916928 master-0 kubenswrapper[31559]: I0216 02:23:02.916765 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.916928 master-0 kubenswrapper[31559]: I0216 02:23:02.916805 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-dir\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.916995 master-0 kubenswrapper[31559]: I0216 02:23:02.916960 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-error\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917118 master-0 kubenswrapper[31559]: I0216 02:23:02.917026 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917348 master-0 kubenswrapper[31559]: I0216 02:23:02.917283 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-policies\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917520 master-0 kubenswrapper[31559]: I0216 02:23:02.917474 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917574 master-0 kubenswrapper[31559]: I0216 02:23:02.917545 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-session\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917620 master-0 kubenswrapper[31559]: I0216 02:23:02.917580 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917653 master-0 kubenswrapper[31559]: I0216 02:23:02.917619 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzdx\" (UniqueName: \"kubernetes.io/projected/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-kube-api-access-8vzdx\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917687 master-0 kubenswrapper[31559]: I0216 02:23:02.917659 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-login\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917720 master-0 kubenswrapper[31559]: I0216 02:23:02.917693 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:02.917756 master-0 kubenswrapper[31559]: I0216 02:23:02.917727 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.018898 master-0 kubenswrapper[31559]: I0216 02:23:03.018831 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019106 master-0 kubenswrapper[31559]: I0216 02:23:03.018920 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019106 master-0 kubenswrapper[31559]: I0216 02:23:03.018957 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-dir\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019106 master-0 kubenswrapper[31559]: I0216 02:23:03.019008 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-error\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019106 master-0 kubenswrapper[31559]: I0216 02:23:03.019052 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019231 master-0 kubenswrapper[31559]: I0216 02:23:03.019107 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-dir\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019231 master-0 kubenswrapper[31559]: I0216 02:23:03.019115 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-policies\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019290 master-0 kubenswrapper[31559]: I0216 02:23:03.019238 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019290 master-0 kubenswrapper[31559]: I0216 02:23:03.019277 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-session\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019348 master-0 kubenswrapper[31559]: I0216 02:23:03.019299 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019467 master-0 kubenswrapper[31559]: I0216 02:23:03.019445 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vzdx\" (UniqueName: \"kubernetes.io/projected/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-kube-api-access-8vzdx\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019521 master-0 kubenswrapper[31559]: I0216 02:23:03.019482 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-login\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019521 master-0 kubenswrapper[31559]: I0216 02:23:03.019503 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:03.019582 master-0 kubenswrapper[31559]: I0216 02:23:03.019526 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-trusted-ca-bundle\") pod
\"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.019757 master-0 kubenswrapper[31559]: I0216 02:23:03.019717 31559 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 02:23:03.020131 master-0 kubenswrapper[31559]: I0216 02:23:03.020088 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.020131 master-0 kubenswrapper[31559]: I0216 02:23:03.020115 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-policies\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.020223 master-0 kubenswrapper[31559]: E0216 02:23:03.020195 31559 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 02:23:03.020255 master-0 kubenswrapper[31559]: E0216 02:23:03.020247 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig podName:9a4294a9-8cd3-4d7a-87d8-fa039261ec60 nodeName:}" failed. No retries permitted until 2026-02-16 02:23:03.520231716 +0000 UTC m=+35.864837731 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig") pod "oauth-openshift-7f488d596f-4xjjh" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60") : configmap "v4-0-config-system-cliconfig" not found Feb 16 02:23:03.022813 master-0 kubenswrapper[31559]: I0216 02:23:03.020676 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.023549 master-0 kubenswrapper[31559]: I0216 02:23:03.023513 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.024345 master-0 kubenswrapper[31559]: I0216 02:23:03.024302 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.024400 master-0 kubenswrapper[31559]: I0216 02:23:03.024337 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-session\") pod 
\"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.028600 master-0 kubenswrapper[31559]: I0216 02:23:03.028564 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-error\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.029022 master-0 kubenswrapper[31559]: I0216 02:23:03.028940 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-login\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.031480 master-0 kubenswrapper[31559]: I0216 02:23:03.031399 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.032734 master-0 kubenswrapper[31559]: I0216 02:23:03.032712 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 
02:23:03.039483 master-0 kubenswrapper[31559]: I0216 02:23:03.039407 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vzdx\" (UniqueName: \"kubernetes.io/projected/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-kube-api-access-8vzdx\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.526493 master-0 kubenswrapper[31559]: I0216 02:23:03.526110 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:03.526493 master-0 kubenswrapper[31559]: E0216 02:23:03.526364 31559 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 02:23:03.527407 master-0 kubenswrapper[31559]: E0216 02:23:03.526856 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig podName:9a4294a9-8cd3-4d7a-87d8-fa039261ec60 nodeName:}" failed. No retries permitted until 2026-02-16 02:23:04.526417819 +0000 UTC m=+36.871023874 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig") pod "oauth-openshift-7f488d596f-4xjjh" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60") : configmap "v4-0-config-system-cliconfig" not found Feb 16 02:23:04.545286 master-0 kubenswrapper[31559]: I0216 02:23:04.544953 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:04.546161 master-0 kubenswrapper[31559]: E0216 02:23:04.545203 31559 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 02:23:04.546161 master-0 kubenswrapper[31559]: E0216 02:23:04.545511 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig podName:9a4294a9-8cd3-4d7a-87d8-fa039261ec60 nodeName:}" failed. No retries permitted until 2026-02-16 02:23:06.545426659 +0000 UTC m=+38.890032714 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig") pod "oauth-openshift-7f488d596f-4xjjh" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60") : configmap "v4-0-config-system-cliconfig" not found Feb 16 02:23:06.576032 master-0 kubenswrapper[31559]: I0216 02:23:06.575926 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:06.576880 master-0 kubenswrapper[31559]: E0216 02:23:06.576134 31559 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 02:23:06.576880 master-0 kubenswrapper[31559]: E0216 02:23:06.576283 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig podName:9a4294a9-8cd3-4d7a-87d8-fa039261ec60 nodeName:}" failed. No retries permitted until 2026-02-16 02:23:10.576249621 +0000 UTC m=+42.920855676 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig") pod "oauth-openshift-7f488d596f-4xjjh" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60") : configmap "v4-0-config-system-cliconfig" not found Feb 16 02:23:09.226625 master-0 kubenswrapper[31559]: I0216 02:23:09.226524 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7f488d596f-4xjjh"] Feb 16 02:23:09.227585 master-0 kubenswrapper[31559]: E0216 02:23:09.227388 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" podUID="9a4294a9-8cd3-4d7a-87d8-fa039261ec60" Feb 16 02:23:09.664901 master-0 kubenswrapper[31559]: I0216 02:23:09.664728 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:09.680377 master-0 kubenswrapper[31559]: I0216 02:23:09.680300 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh" Feb 16 02:23:09.827510 master-0 kubenswrapper[31559]: I0216 02:23:09.827412 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-session\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.827793 master-0 kubenswrapper[31559]: I0216 02:23:09.827552 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-router-certs\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.827793 master-0 kubenswrapper[31559]: I0216 02:23:09.827741 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-service-ca\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.827949 master-0 kubenswrapper[31559]: I0216 02:23:09.827825 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-dir\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.827949 master-0 kubenswrapper[31559]: I0216 02:23:09.827916 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-provider-selection\") pod 
\"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.828080 master-0 kubenswrapper[31559]: I0216 02:23:09.827964 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-error\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.828080 master-0 kubenswrapper[31559]: I0216 02:23:09.828026 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-login\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.828225 master-0 kubenswrapper[31559]: I0216 02:23:09.828115 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-trusted-ca-bundle\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.828225 master-0 kubenswrapper[31559]: I0216 02:23:09.828146 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-policies\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.828225 master-0 kubenswrapper[31559]: I0216 02:23:09.828192 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vzdx\" (UniqueName: \"kubernetes.io/projected/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-kube-api-access-8vzdx\") pod 
\"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.828424 master-0 kubenswrapper[31559]: I0216 02:23:09.828190 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:23:09.828424 master-0 kubenswrapper[31559]: I0216 02:23:09.828226 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-serving-cert\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.828424 master-0 kubenswrapper[31559]: I0216 02:23:09.828353 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-ocp-branding-template\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " Feb 16 02:23:09.828988 master-0 kubenswrapper[31559]: I0216 02:23:09.828928 31559 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:09.830077 master-0 kubenswrapper[31559]: I0216 02:23:09.830001 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: 
"9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:23:09.830646 master-0 kubenswrapper[31559]: I0216 02:23:09.830587 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:23:09.830807 master-0 kubenswrapper[31559]: I0216 02:23:09.830755 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:23:09.832819 master-0 kubenswrapper[31559]: I0216 02:23:09.832471 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:09.832819 master-0 kubenswrapper[31559]: I0216 02:23:09.832699 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:09.834152 master-0 kubenswrapper[31559]: I0216 02:23:09.834095 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:09.835262 master-0 kubenswrapper[31559]: I0216 02:23:09.835172 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:09.835391 master-0 kubenswrapper[31559]: I0216 02:23:09.835254 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:09.835391 master-0 kubenswrapper[31559]: I0216 02:23:09.835257 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-kube-api-access-8vzdx" (OuterVolumeSpecName: "kube-api-access-8vzdx") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "kube-api-access-8vzdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:23:09.836110 master-0 kubenswrapper[31559]: I0216 02:23:09.836036 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:09.836388 master-0 kubenswrapper[31559]: I0216 02:23:09.836341 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:09.931667 master-0 kubenswrapper[31559]: I0216 02:23:09.931538 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:09.931667 master-0 kubenswrapper[31559]: I0216 02:23:09.931599 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:09.931667 master-0 kubenswrapper[31559]: I0216 02:23:09.931620 31559 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:09.931667 master-0 kubenswrapper[31559]: I0216 02:23:09.931638 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vzdx\" (UniqueName: \"kubernetes.io/projected/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-kube-api-access-8vzdx\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:09.931667 master-0 kubenswrapper[31559]: I0216 02:23:09.931658 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:09.931667 master-0 kubenswrapper[31559]: I0216 02:23:09.931678 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:09.934136 master-0 kubenswrapper[31559]: 
I0216 02:23:09.931705 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:09.934136 master-0 kubenswrapper[31559]: I0216 02:23:09.931732 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:09.934136 master-0 kubenswrapper[31559]: I0216 02:23:09.931825 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:09.934136 master-0 kubenswrapper[31559]: I0216 02:23:09.931916 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:09.934136 master-0 kubenswrapper[31559]: I0216 02:23:09.932002 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:10.138899 master-0 kubenswrapper[31559]: I0216 02:23:10.138645 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 02:23:10.139197 master-0 kubenswrapper[31559]: E0216 02:23:10.138943 31559 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 02:23:10.139197 master-0 kubenswrapper[31559]: E0216 02:23:10.138975 31559 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 02:23:10.139197 master-0 kubenswrapper[31559]: E0216 02:23:10.139055 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access podName:20bf60f7-9e36-477e-96a5-4fc8dc1bca5e nodeName:}" failed. No retries permitted until 2026-02-16 02:23:42.139032062 +0000 UTC m=+74.483638117 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access") pod "installer-3-master-0" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 02:23:10.645802 master-0 kubenswrapper[31559]: I0216 02:23:10.645712 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:10.646862 master-0 kubenswrapper[31559]: I0216 02:23:10.646795 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f488d596f-4xjjh\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") " pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:10.672635 master-0 kubenswrapper[31559]: I0216 02:23:10.672559 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f488d596f-4xjjh"
Feb 16 02:23:10.746682 master-0 kubenswrapper[31559]: I0216 02:23:10.746626 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") pod \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\" (UID: \"9a4294a9-8cd3-4d7a-87d8-fa039261ec60\") "
Feb 16 02:23:10.747406 master-0 kubenswrapper[31559]: I0216 02:23:10.747324 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9a4294a9-8cd3-4d7a-87d8-fa039261ec60" (UID: "9a4294a9-8cd3-4d7a-87d8-fa039261ec60"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:23:10.847964 master-0 kubenswrapper[31559]: I0216 02:23:10.847881 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a4294a9-8cd3-4d7a-87d8-fa039261ec60-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:11.025815 master-0 kubenswrapper[31559]: I0216 02:23:11.025733 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-565b9bd659-h8ks5"]
Feb 16 02:23:11.031067 master-0 kubenswrapper[31559]: I0216 02:23:11.031013 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7f488d596f-4xjjh"]
Feb 16 02:23:11.031247 master-0 kubenswrapper[31559]: I0216 02:23:11.031165 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.032394 master-0 kubenswrapper[31559]: I0216 02:23:11.032346 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-7f488d596f-4xjjh"]
Feb 16 02:23:11.036869 master-0 kubenswrapper[31559]: I0216 02:23:11.036827 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 02:23:11.037096 master-0 kubenswrapper[31559]: I0216 02:23:11.037059 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-xg7sl"
Feb 16 02:23:11.037253 master-0 kubenswrapper[31559]: I0216 02:23:11.037216 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 02:23:11.039322 master-0 kubenswrapper[31559]: I0216 02:23:11.039284 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 02:23:11.039679 master-0 kubenswrapper[31559]: I0216 02:23:11.039641 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 02:23:11.040017 master-0 kubenswrapper[31559]: I0216 02:23:11.039973 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 02:23:11.040618 master-0 kubenswrapper[31559]: I0216 02:23:11.040582 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 02:23:11.041362 master-0 kubenswrapper[31559]: I0216 02:23:11.041321 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 02:23:11.041825 master-0 kubenswrapper[31559]: I0216 02:23:11.041796 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 02:23:11.042093 master-0 kubenswrapper[31559]: I0216 02:23:11.042062 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 02:23:11.042636 master-0 kubenswrapper[31559]: I0216 02:23:11.042607 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 02:23:11.043647 master-0 kubenswrapper[31559]: I0216 02:23:11.043615 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 02:23:11.049254 master-0 kubenswrapper[31559]: I0216 02:23:11.049201 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-565b9bd659-h8ks5"]
Feb 16 02:23:11.055830 master-0 kubenswrapper[31559]: I0216 02:23:11.055789 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 02:23:11.060767 master-0 kubenswrapper[31559]: I0216 02:23:11.060712 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 02:23:11.161164 master-0 kubenswrapper[31559]: I0216 02:23:11.161069 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.161164 master-0 kubenswrapper[31559]: I0216 02:23:11.161168 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-audit-policies\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.161560 master-0 kubenswrapper[31559]: I0216 02:23:11.161212 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-service-ca\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.161560 master-0 kubenswrapper[31559]: I0216 02:23:11.161430 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxq4b\" (UniqueName: \"kubernetes.io/projected/be05b90a-997b-4183-935a-05ee1461cd65-kube-api-access-mxq4b\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.161560 master-0 kubenswrapper[31559]: I0216 02:23:11.161547 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-cliconfig\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.161756 master-0 kubenswrapper[31559]: I0216 02:23:11.161625 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-router-certs\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.161756 master-0 kubenswrapper[31559]: I0216 02:23:11.161728 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.161884 master-0 kubenswrapper[31559]: I0216 02:23:11.161868 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.161965 master-0 kubenswrapper[31559]: I0216 02:23:11.161911 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-login\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.162042 master-0 kubenswrapper[31559]: I0216 02:23:11.161961 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-session\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.162106 master-0 kubenswrapper[31559]: I0216 02:23:11.162068 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be05b90a-997b-4183-935a-05ee1461cd65-audit-dir\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.162177 master-0 kubenswrapper[31559]: I0216 02:23:11.162128 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-error\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.162249 master-0 kubenswrapper[31559]: I0216 02:23:11.162199 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.263422 master-0 kubenswrapper[31559]: I0216 02:23:11.263339 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-cliconfig\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.263422 master-0 kubenswrapper[31559]: I0216 02:23:11.263421 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-router-certs\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.263874 master-0 kubenswrapper[31559]: I0216 02:23:11.263797 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.264050 master-0 kubenswrapper[31559]: I0216 02:23:11.264008 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.264141 master-0 kubenswrapper[31559]: I0216 02:23:11.264066 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-login\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.264141 master-0 kubenswrapper[31559]: I0216 02:23:11.264120 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-session\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.264290 master-0 kubenswrapper[31559]: I0216 02:23:11.264191 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be05b90a-997b-4183-935a-05ee1461cd65-audit-dir\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.264526 master-0 kubenswrapper[31559]: I0216 02:23:11.264495 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be05b90a-997b-4183-935a-05ee1461cd65-audit-dir\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.264763 master-0 kubenswrapper[31559]: I0216 02:23:11.264687 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-error\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.264884 master-0 kubenswrapper[31559]: I0216 02:23:11.264844 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-cliconfig\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.265501 master-0 kubenswrapper[31559]: I0216 02:23:11.265404 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.265708 master-0 kubenswrapper[31559]: I0216 02:23:11.265667 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.265818 master-0 kubenswrapper[31559]: I0216 02:23:11.265746 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-audit-policies\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.266110 master-0 kubenswrapper[31559]: I0216 02:23:11.266035 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-service-ca\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.266220 master-0 kubenswrapper[31559]: I0216 02:23:11.266181 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxq4b\" (UniqueName: \"kubernetes.io/projected/be05b90a-997b-4183-935a-05ee1461cd65-kube-api-access-mxq4b\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.267493 master-0 kubenswrapper[31559]: I0216 02:23:11.267374 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-service-ca\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.267716 master-0 kubenswrapper[31559]: I0216 02:23:11.267651 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-audit-policies\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.268022 master-0 kubenswrapper[31559]: I0216 02:23:11.267963 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.269776 master-0 kubenswrapper[31559]: I0216 02:23:11.269713 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-router-certs\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.281597 master-0 kubenswrapper[31559]: I0216 02:23:11.272114 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-error\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.281597 master-0 kubenswrapper[31559]: I0216 02:23:11.273969 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.281597 master-0 kubenswrapper[31559]: I0216 02:23:11.274308 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-session\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.281597 master-0 kubenswrapper[31559]: I0216 02:23:11.274379 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.281597 master-0 kubenswrapper[31559]: I0216 02:23:11.276267 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.281597 master-0 kubenswrapper[31559]: I0216 02:23:11.281129 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-login\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.294651 master-0 kubenswrapper[31559]: I0216 02:23:11.294587 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxq4b\" (UniqueName: \"kubernetes.io/projected/be05b90a-997b-4183-935a-05ee1461cd65-kube-api-access-mxq4b\") pod \"oauth-openshift-565b9bd659-h8ks5\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") " pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.395367 master-0 kubenswrapper[31559]: I0216 02:23:11.395288 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:11.884014 master-0 kubenswrapper[31559]: I0216 02:23:11.883864 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-565b9bd659-h8ks5"]
Feb 16 02:23:11.894958 master-0 kubenswrapper[31559]: W0216 02:23:11.894875 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe05b90a_997b_4183_935a_05ee1461cd65.slice/crio-87803ec4d61e8a24398b474a4f909ce87b4c9d03a0f6b36650a71a600eadd419 WatchSource:0}: Error finding container 87803ec4d61e8a24398b474a4f909ce87b4c9d03a0f6b36650a71a600eadd419: Status 404 returned error can't find the container with id 87803ec4d61e8a24398b474a4f909ce87b4c9d03a0f6b36650a71a600eadd419
Feb 16 02:23:11.900872 master-0 kubenswrapper[31559]: I0216 02:23:11.900793 31559 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 02:23:11.946360 master-0 kubenswrapper[31559]: I0216 02:23:11.942023 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a4294a9-8cd3-4d7a-87d8-fa039261ec60" path="/var/lib/kubelet/pods/9a4294a9-8cd3-4d7a-87d8-fa039261ec60/volumes"
Feb 16 02:23:12.686576 master-0 kubenswrapper[31559]: I0216 02:23:12.686508 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5" event={"ID":"be05b90a-997b-4183-935a-05ee1461cd65","Type":"ContainerStarted","Data":"87803ec4d61e8a24398b474a4f909ce87b4c9d03a0f6b36650a71a600eadd419"}
Feb 16 02:23:13.997152 master-0 kubenswrapper[31559]: I0216 02:23:13.996680 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-565b9bd659-h8ks5"]
Feb 16 02:23:14.709064 master-0 kubenswrapper[31559]: I0216 02:23:14.708973 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5" event={"ID":"be05b90a-997b-4183-935a-05ee1461cd65","Type":"ContainerStarted","Data":"cdf3936da8424e68d746f1222eb12daab021fe9d4d7c357f4216d0e298e452d3"}
Feb 16 02:23:14.709527 master-0 kubenswrapper[31559]: I0216 02:23:14.709463 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:14.748418 master-0 kubenswrapper[31559]: I0216 02:23:14.748281 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5" podStartSLOduration=3.5110905519999998 podStartE2EDuration="5.748250731s" podCreationTimestamp="2026-02-16 02:23:09 +0000 UTC" firstStartedPulling="2026-02-16 02:23:11.900682338 +0000 UTC m=+44.245288393" lastFinishedPulling="2026-02-16 02:23:14.137842557 +0000 UTC m=+46.482448572" observedRunningTime="2026-02-16 02:23:14.740233863 +0000 UTC m=+47.084839918" watchObservedRunningTime="2026-02-16 02:23:14.748250731 +0000 UTC m=+47.092856786"
Feb 16 02:23:14.974889 master-0 kubenswrapper[31559]: I0216 02:23:14.974636 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:16.107980 master-0 kubenswrapper[31559]: I0216 02:23:16.107878 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:23:16.114619 master-0 kubenswrapper[31559]: I0216 02:23:16.114557 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2"
Feb 16 02:23:21.447063 master-0 kubenswrapper[31559]: I0216 02:23:21.446950 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5788fc6459-29m25"]
Feb 16 02:23:21.448202 master-0 kubenswrapper[31559]: I0216 02:23:21.447296 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" podUID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerName="controller-manager" containerID="cri-o://ca909365df9d2aff41f044240436ffc1049e0e2619dc9a8b0a9a1fe204214291" gracePeriod=30
Feb 16 02:23:21.478369 master-0 kubenswrapper[31559]: I0216 02:23:21.478299 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"]
Feb 16 02:23:21.478688 master-0 kubenswrapper[31559]: I0216 02:23:21.478639 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" podUID="83883885-f493-4559-9c0f-e28d69712475" containerName="route-controller-manager" containerID="cri-o://d4e52a7c33349d83246362e459328cc0aaf6ec2111900263828e0014dd564224" gracePeriod=30
Feb 16 02:23:21.789918 master-0 kubenswrapper[31559]: I0216 02:23:21.789728 31559 generic.go:334] "Generic (PLEG): container finished" podID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerID="ca909365df9d2aff41f044240436ffc1049e0e2619dc9a8b0a9a1fe204214291" exitCode=0
Feb 16 02:23:21.790182 master-0 kubenswrapper[31559]: I0216 02:23:21.789960 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" event={"ID":"e491b5ed-9c09-4308-9843-fba8d43bd3ae","Type":"ContainerDied","Data":"ca909365df9d2aff41f044240436ffc1049e0e2619dc9a8b0a9a1fe204214291"}
Feb 16 02:23:21.790182 master-0 kubenswrapper[31559]: I0216 02:23:21.790023 31559 scope.go:117] "RemoveContainer" containerID="fb538a41cea5f683a2ab8b99be06eb74affc528bd353ff7cabad5516264bee81"
Feb 16 02:23:21.794599 master-0 kubenswrapper[31559]: I0216 02:23:21.794298 31559 generic.go:334] "Generic (PLEG): container finished" podID="83883885-f493-4559-9c0f-e28d69712475" containerID="d4e52a7c33349d83246362e459328cc0aaf6ec2111900263828e0014dd564224" exitCode=0
Feb 16 02:23:21.794599 master-0 kubenswrapper[31559]: I0216 02:23:21.794352 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" event={"ID":"83883885-f493-4559-9c0f-e28d69712475","Type":"ContainerDied","Data":"d4e52a7c33349d83246362e459328cc0aaf6ec2111900263828e0014dd564224"}
Feb 16 02:23:21.842871 master-0 kubenswrapper[31559]: I0216 02:23:21.842808 31559 scope.go:117] "RemoveContainer" containerID="2a160f2c1742de9f4ba99becfe5db3107e11e652c40ff70cb3349e1627a9a147"
Feb 16 02:23:21.969145 master-0 kubenswrapper[31559]: I0216 02:23:21.969101 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25"
Feb 16 02:23:21.978605 master-0 kubenswrapper[31559]: I0216 02:23:21.978565 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"
Feb 16 02:23:22.047365 master-0 kubenswrapper[31559]: I0216 02:23:22.047212 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert\") pod \"83883885-f493-4559-9c0f-e28d69712475\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") "
Feb 16 02:23:22.047365 master-0 kubenswrapper[31559]: I0216 02:23:22.047266 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles\") pod \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") "
Feb 16 02:23:22.047365 master-0 kubenswrapper[31559]: I0216 02:23:22.047340 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert\") pod \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") "
Feb 16 02:23:22.047776 master-0 kubenswrapper[31559]: I0216 02:23:22.047394 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca\") pod \"83883885-f493-4559-9c0f-e28d69712475\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") "
Feb 16 02:23:22.047776 master-0 kubenswrapper[31559]: I0216 02:23:22.047446 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config\") pod \"83883885-f493-4559-9c0f-e28d69712475\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") "
Feb 16 02:23:22.047776 master-0 kubenswrapper[31559]: I0216 02:23:22.047478 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4p8p\" (UniqueName: \"kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p\") pod \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") "
Feb 16 02:23:22.047776 master-0 kubenswrapper[31559]: I0216 02:23:22.047589 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config\") pod \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") "
Feb 16 02:23:22.047776 master-0 kubenswrapper[31559]: I0216 02:23:22.047626 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nmjz\" (UniqueName: \"kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz\") pod \"83883885-f493-4559-9c0f-e28d69712475\" (UID: \"83883885-f493-4559-9c0f-e28d69712475\") "
Feb 16 02:23:22.048510 master-0 kubenswrapper[31559]: I0216 02:23:22.048394 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca" (OuterVolumeSpecName: "client-ca") pod "83883885-f493-4559-9c0f-e28d69712475" (UID: "83883885-f493-4559-9c0f-e28d69712475"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:23:22.048634 master-0 kubenswrapper[31559]: I0216 02:23:22.048512 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca\") pod \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\" (UID: \"e491b5ed-9c09-4308-9843-fba8d43bd3ae\") "
Feb 16 02:23:22.049490 master-0 kubenswrapper[31559]: I0216 02:23:22.049045 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e491b5ed-9c09-4308-9843-fba8d43bd3ae" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:23:22.049490 master-0 kubenswrapper[31559]: I0216 02:23:22.049082 31559 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:22.049490 master-0 kubenswrapper[31559]: I0216 02:23:22.049077 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "e491b5ed-9c09-4308-9843-fba8d43bd3ae" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:23:22.049490 master-0 kubenswrapper[31559]: I0216 02:23:22.049223 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config" (OuterVolumeSpecName: "config") pod "e491b5ed-9c09-4308-9843-fba8d43bd3ae" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:23:22.049667 master-0 kubenswrapper[31559]: I0216 02:23:22.049629 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config" (OuterVolumeSpecName: "config") pod "83883885-f493-4559-9c0f-e28d69712475" (UID: "83883885-f493-4559-9c0f-e28d69712475"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:23:22.051648 master-0 kubenswrapper[31559]: I0216 02:23:22.051613 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p" (OuterVolumeSpecName: "kube-api-access-j4p8p") pod "e491b5ed-9c09-4308-9843-fba8d43bd3ae" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae"). InnerVolumeSpecName "kube-api-access-j4p8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:23:22.051768 master-0 kubenswrapper[31559]: I0216 02:23:22.051728 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83883885-f493-4559-9c0f-e28d69712475" (UID: "83883885-f493-4559-9c0f-e28d69712475"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:22.052122 master-0 kubenswrapper[31559]: I0216 02:23:22.052062 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e491b5ed-9c09-4308-9843-fba8d43bd3ae" (UID: "e491b5ed-9c09-4308-9843-fba8d43bd3ae"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:23:22.052854 master-0 kubenswrapper[31559]: I0216 02:23:22.052819 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz" (OuterVolumeSpecName: "kube-api-access-6nmjz") pod "83883885-f493-4559-9c0f-e28d69712475" (UID: "83883885-f493-4559-9c0f-e28d69712475"). InnerVolumeSpecName "kube-api-access-6nmjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:23:22.150270 master-0 kubenswrapper[31559]: I0216 02:23:22.150205 31559 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83883885-f493-4559-9c0f-e28d69712475-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:22.150270 master-0 kubenswrapper[31559]: I0216 02:23:22.150263 31559 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:22.150561 master-0 kubenswrapper[31559]: I0216 02:23:22.150286 31559 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e491b5ed-9c09-4308-9843-fba8d43bd3ae-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:22.150561 master-0 kubenswrapper[31559]: I0216 02:23:22.150310 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83883885-f493-4559-9c0f-e28d69712475-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:22.150561 master-0 kubenswrapper[31559]: I0216 02:23:22.150331 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4p8p\" (UniqueName: \"kubernetes.io/projected/e491b5ed-9c09-4308-9843-fba8d43bd3ae-kube-api-access-j4p8p\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:22.150561 master-0 
kubenswrapper[31559]: I0216 02:23:22.150354 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:22.150561 master-0 kubenswrapper[31559]: I0216 02:23:22.150373 31559 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e491b5ed-9c09-4308-9843-fba8d43bd3ae-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:22.150561 master-0 kubenswrapper[31559]: I0216 02:23:22.150391 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nmjz\" (UniqueName: \"kubernetes.io/projected/83883885-f493-4559-9c0f-e28d69712475-kube-api-access-6nmjz\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:22.807768 master-0 kubenswrapper[31559]: I0216 02:23:22.807657 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" event={"ID":"e491b5ed-9c09-4308-9843-fba8d43bd3ae","Type":"ContainerDied","Data":"debb4f03d3db8741a1ba37d50a33cf649d64e1d2b1233200aa072be50cd42b72"} Feb 16 02:23:22.807768 master-0 kubenswrapper[31559]: I0216 02:23:22.807757 31559 scope.go:117] "RemoveContainer" containerID="ca909365df9d2aff41f044240436ffc1049e0e2619dc9a8b0a9a1fe204214291" Feb 16 02:23:22.809534 master-0 kubenswrapper[31559]: I0216 02:23:22.807672 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5788fc6459-29m25" Feb 16 02:23:22.815494 master-0 kubenswrapper[31559]: I0216 02:23:22.811136 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" event={"ID":"83883885-f493-4559-9c0f-e28d69712475","Type":"ContainerDied","Data":"e55ecc9e109900a7552879fd5133496a07323a5858e7c3c83ce557b826a22cc6"} Feb 16 02:23:22.815494 master-0 kubenswrapper[31559]: I0216 02:23:22.811245 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2" Feb 16 02:23:22.836467 master-0 kubenswrapper[31559]: I0216 02:23:22.835980 31559 scope.go:117] "RemoveContainer" containerID="d4e52a7c33349d83246362e459328cc0aaf6ec2111900263828e0014dd564224" Feb 16 02:23:22.872984 master-0 kubenswrapper[31559]: I0216 02:23:22.872894 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"] Feb 16 02:23:22.876738 master-0 kubenswrapper[31559]: I0216 02:23:22.876665 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-998bd8b4b-hm5k2"] Feb 16 02:23:22.893749 master-0 kubenswrapper[31559]: I0216 02:23:22.893672 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5788fc6459-29m25"] Feb 16 02:23:22.897297 master-0 kubenswrapper[31559]: I0216 02:23:22.897235 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5788fc6459-29m25"] Feb 16 02:23:23.270219 master-0 kubenswrapper[31559]: I0216 02:23:23.270110 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn"] Feb 16 02:23:23.270774 master-0 kubenswrapper[31559]: 
E0216 02:23:23.270724 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerName="controller-manager" Feb 16 02:23:23.270774 master-0 kubenswrapper[31559]: I0216 02:23:23.270765 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerName="controller-manager" Feb 16 02:23:23.271016 master-0 kubenswrapper[31559]: E0216 02:23:23.270805 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerName="controller-manager" Feb 16 02:23:23.271016 master-0 kubenswrapper[31559]: I0216 02:23:23.270823 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerName="controller-manager" Feb 16 02:23:23.271016 master-0 kubenswrapper[31559]: E0216 02:23:23.270843 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83883885-f493-4559-9c0f-e28d69712475" containerName="route-controller-manager" Feb 16 02:23:23.271016 master-0 kubenswrapper[31559]: I0216 02:23:23.270862 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="83883885-f493-4559-9c0f-e28d69712475" containerName="route-controller-manager" Feb 16 02:23:23.271016 master-0 kubenswrapper[31559]: E0216 02:23:23.270891 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83883885-f493-4559-9c0f-e28d69712475" containerName="route-controller-manager" Feb 16 02:23:23.271514 master-0 kubenswrapper[31559]: I0216 02:23:23.270909 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="83883885-f493-4559-9c0f-e28d69712475" containerName="route-controller-manager" Feb 16 02:23:23.271514 master-0 kubenswrapper[31559]: I0216 02:23:23.271393 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerName="controller-manager" Feb 16 02:23:23.271514 master-0 kubenswrapper[31559]: I0216 02:23:23.271490 31559 
memory_manager.go:354] "RemoveStaleState removing state" podUID="83883885-f493-4559-9c0f-e28d69712475" containerName="route-controller-manager" Feb 16 02:23:23.271801 master-0 kubenswrapper[31559]: I0216 02:23:23.271520 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="83883885-f493-4559-9c0f-e28d69712475" containerName="route-controller-manager" Feb 16 02:23:23.271801 master-0 kubenswrapper[31559]: I0216 02:23:23.271561 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" containerName="controller-manager" Feb 16 02:23:23.272543 master-0 kubenswrapper[31559]: I0216 02:23:23.272486 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.274016 master-0 kubenswrapper[31559]: I0216 02:23:23.273939 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-654bf44894-fc2ml"] Feb 16 02:23:23.275770 master-0 kubenswrapper[31559]: I0216 02:23:23.275702 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-j5nng" Feb 16 02:23:23.275940 master-0 kubenswrapper[31559]: I0216 02:23:23.275765 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 02:23:23.275940 master-0 kubenswrapper[31559]: I0216 02:23:23.275876 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.278058 master-0 kubenswrapper[31559]: I0216 02:23:23.277939 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 02:23:23.278239 master-0 kubenswrapper[31559]: I0216 02:23:23.277977 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 02:23:23.278239 master-0 kubenswrapper[31559]: I0216 02:23:23.278127 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 02:23:23.279345 master-0 kubenswrapper[31559]: I0216 02:23:23.278516 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 02:23:23.285388 master-0 kubenswrapper[31559]: I0216 02:23:23.285314 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 02:23:23.285736 master-0 kubenswrapper[31559]: I0216 02:23:23.285634 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 02:23:23.285736 master-0 kubenswrapper[31559]: I0216 02:23:23.285635 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 02:23:23.285942 master-0 kubenswrapper[31559]: I0216 02:23:23.285745 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 02:23:23.285942 master-0 kubenswrapper[31559]: I0216 02:23:23.285797 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 02:23:23.285942 master-0 kubenswrapper[31559]: I0216 02:23:23.285636 31559 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-8pgh8" Feb 16 02:23:23.289702 master-0 kubenswrapper[31559]: I0216 02:23:23.289621 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn"] Feb 16 02:23:23.295623 master-0 kubenswrapper[31559]: I0216 02:23:23.295543 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-654bf44894-fc2ml"] Feb 16 02:23:23.312484 master-0 kubenswrapper[31559]: I0216 02:23:23.310569 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 02:23:23.373799 master-0 kubenswrapper[31559]: I0216 02:23:23.373737 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27e35264-183d-4ecf-9e80-e622758470f6-client-ca\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.374025 master-0 kubenswrapper[31559]: I0216 02:23:23.373834 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e35264-183d-4ecf-9e80-e622758470f6-config\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.374025 master-0 kubenswrapper[31559]: I0216 02:23:23.373907 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5ll\" (UniqueName: \"kubernetes.io/projected/89ca64ac-052f-498b-9b83-804bffd20d66-kube-api-access-6c5ll\") pod \"controller-manager-654bf44894-fc2ml\" (UID: 
\"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.374025 master-0 kubenswrapper[31559]: I0216 02:23:23.373939 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e35264-183d-4ecf-9e80-e622758470f6-serving-cert\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.374025 master-0 kubenswrapper[31559]: I0216 02:23:23.373968 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-config\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.374280 master-0 kubenswrapper[31559]: I0216 02:23:23.374136 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-proxy-ca-bundles\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.374349 master-0 kubenswrapper[31559]: I0216 02:23:23.374298 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7md\" (UniqueName: \"kubernetes.io/projected/27e35264-183d-4ecf-9e80-e622758470f6-kube-api-access-nc7md\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.374417 master-0 
kubenswrapper[31559]: I0216 02:23:23.374354 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ca64ac-052f-498b-9b83-804bffd20d66-serving-cert\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.374559 master-0 kubenswrapper[31559]: I0216 02:23:23.374513 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-client-ca\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.476353 master-0 kubenswrapper[31559]: I0216 02:23:23.476245 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27e35264-183d-4ecf-9e80-e622758470f6-client-ca\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.476739 master-0 kubenswrapper[31559]: I0216 02:23:23.476551 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e35264-183d-4ecf-9e80-e622758470f6-config\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.476739 master-0 kubenswrapper[31559]: I0216 02:23:23.476695 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5ll\" (UniqueName: 
\"kubernetes.io/projected/89ca64ac-052f-498b-9b83-804bffd20d66-kube-api-access-6c5ll\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.476916 master-0 kubenswrapper[31559]: I0216 02:23:23.476768 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e35264-183d-4ecf-9e80-e622758470f6-serving-cert\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.476916 master-0 kubenswrapper[31559]: I0216 02:23:23.476827 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-config\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.476916 master-0 kubenswrapper[31559]: I0216 02:23:23.476886 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-proxy-ca-bundles\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.477085 master-0 kubenswrapper[31559]: I0216 02:23:23.476970 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc7md\" (UniqueName: \"kubernetes.io/projected/27e35264-183d-4ecf-9e80-e622758470f6-kube-api-access-nc7md\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " 
pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.477085 master-0 kubenswrapper[31559]: I0216 02:23:23.477020 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ca64ac-052f-498b-9b83-804bffd20d66-serving-cert\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.477219 master-0 kubenswrapper[31559]: I0216 02:23:23.477086 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-client-ca\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.479793 master-0 kubenswrapper[31559]: I0216 02:23:23.479146 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-client-ca\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.479793 master-0 kubenswrapper[31559]: I0216 02:23:23.479223 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e35264-183d-4ecf-9e80-e622758470f6-config\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.479793 master-0 kubenswrapper[31559]: I0216 02:23:23.479304 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/27e35264-183d-4ecf-9e80-e622758470f6-client-ca\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.479793 master-0 kubenswrapper[31559]: I0216 02:23:23.479682 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-proxy-ca-bundles\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.480210 master-0 kubenswrapper[31559]: I0216 02:23:23.479813 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ca64ac-052f-498b-9b83-804bffd20d66-config\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.489521 master-0 kubenswrapper[31559]: I0216 02:23:23.483156 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ca64ac-052f-498b-9b83-804bffd20d66-serving-cert\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.489521 master-0 kubenswrapper[31559]: I0216 02:23:23.483395 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e35264-183d-4ecf-9e80-e622758470f6-serving-cert\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.511527 master-0 
kubenswrapper[31559]: I0216 02:23:23.511240 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc7md\" (UniqueName: \"kubernetes.io/projected/27e35264-183d-4ecf-9e80-e622758470f6-kube-api-access-nc7md\") pod \"route-controller-manager-84bdbf4b8b-dwrfn\" (UID: \"27e35264-183d-4ecf-9e80-e622758470f6\") " pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.511826 master-0 kubenswrapper[31559]: I0216 02:23:23.511748 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5ll\" (UniqueName: \"kubernetes.io/projected/89ca64ac-052f-498b-9b83-804bffd20d66-kube-api-access-6c5ll\") pod \"controller-manager-654bf44894-fc2ml\" (UID: \"89ca64ac-052f-498b-9b83-804bffd20d66\") " pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" Feb 16 02:23:23.622198 master-0 kubenswrapper[31559]: I0216 02:23:23.622033 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" Feb 16 02:23:23.651532 master-0 kubenswrapper[31559]: I0216 02:23:23.650874 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml"
Feb 16 02:23:23.956466 master-0 kubenswrapper[31559]: I0216 02:23:23.951014 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83883885-f493-4559-9c0f-e28d69712475" path="/var/lib/kubelet/pods/83883885-f493-4559-9c0f-e28d69712475/volumes"
Feb 16 02:23:23.956466 master-0 kubenswrapper[31559]: I0216 02:23:23.951982 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e491b5ed-9c09-4308-9843-fba8d43bd3ae" path="/var/lib/kubelet/pods/e491b5ed-9c09-4308-9843-fba8d43bd3ae/volumes"
Feb 16 02:23:23.956466 master-0 kubenswrapper[31559]: I0216 02:23:23.952649 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn"]
Feb 16 02:23:23.967032 master-0 kubenswrapper[31559]: W0216 02:23:23.966951 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27e35264_183d_4ecf_9e80_e622758470f6.slice/crio-dda6d000fdc6be3fcd72cb19fcdb44909b223ba083071cd7f28c3256634330c1 WatchSource:0}: Error finding container dda6d000fdc6be3fcd72cb19fcdb44909b223ba083071cd7f28c3256634330c1: Status 404 returned error can't find the container with id dda6d000fdc6be3fcd72cb19fcdb44909b223ba083071cd7f28c3256634330c1
Feb 16 02:23:24.013910 master-0 kubenswrapper[31559]: I0216 02:23:24.006656 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-654bf44894-fc2ml"]
Feb 16 02:23:24.836228 master-0 kubenswrapper[31559]: I0216 02:23:24.836157 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" event={"ID":"89ca64ac-052f-498b-9b83-804bffd20d66","Type":"ContainerStarted","Data":"ec0470d97ed3e70afba40b067de4c90b7bcacd0a24f6e61dba438fdac4562d1a"}
Feb 16 02:23:24.836228 master-0 kubenswrapper[31559]: I0216 02:23:24.836219 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" event={"ID":"89ca64ac-052f-498b-9b83-804bffd20d66","Type":"ContainerStarted","Data":"5c530264923fbb8cfd37ef7682f58dcd83272f8cb797ba8b358e4e859b8d44d5"}
Feb 16 02:23:24.836699 master-0 kubenswrapper[31559]: I0216 02:23:24.836458 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml"
Feb 16 02:23:24.837832 master-0 kubenswrapper[31559]: I0216 02:23:24.837789 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" event={"ID":"27e35264-183d-4ecf-9e80-e622758470f6","Type":"ContainerStarted","Data":"6ca715b3cc545b72fc3ac069511630cf6e02937723f3e9cac7015d8db36f5c62"}
Feb 16 02:23:24.837832 master-0 kubenswrapper[31559]: I0216 02:23:24.837824 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" event={"ID":"27e35264-183d-4ecf-9e80-e622758470f6","Type":"ContainerStarted","Data":"dda6d000fdc6be3fcd72cb19fcdb44909b223ba083071cd7f28c3256634330c1"}
Feb 16 02:23:24.838247 master-0 kubenswrapper[31559]: I0216 02:23:24.838215 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn"
Feb 16 02:23:24.846842 master-0 kubenswrapper[31559]: I0216 02:23:24.846781 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml"
Feb 16 02:23:24.849097 master-0 kubenswrapper[31559]: I0216 02:23:24.849067 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn"
Feb 16 02:23:24.861033 master-0 kubenswrapper[31559]: I0216 02:23:24.860958 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-654bf44894-fc2ml" podStartSLOduration=3.860945294 podStartE2EDuration="3.860945294s" podCreationTimestamp="2026-02-16 02:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:23:24.860021811 +0000 UTC m=+57.204627816" watchObservedRunningTime="2026-02-16 02:23:24.860945294 +0000 UTC m=+57.205551309"
Feb 16 02:23:24.887061 master-0 kubenswrapper[31559]: I0216 02:23:24.886981 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84bdbf4b8b-dwrfn" podStartSLOduration=3.886962894 podStartE2EDuration="3.886962894s" podCreationTimestamp="2026-02-16 02:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:23:24.885700823 +0000 UTC m=+57.230306828" watchObservedRunningTime="2026-02-16 02:23:24.886962894 +0000 UTC m=+57.231568909"
Feb 16 02:23:27.885845 master-0 kubenswrapper[31559]: I0216 02:23:27.885676 31559 scope.go:117] "RemoveContainer" containerID="6bc4b5ee1e89ed7a76ec9068e6cdb19289d70c03bd852b3dc8e93c9d7f9e1ba4"
Feb 16 02:23:36.863349 master-0 kubenswrapper[31559]: I0216 02:23:36.863220 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 16 02:23:36.865056 master-0 kubenswrapper[31559]: I0216 02:23:36.865004 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:36.875503 master-0 kubenswrapper[31559]: I0216 02:23:36.873641 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 02:23:36.875503 master-0 kubenswrapper[31559]: I0216 02:23:36.873967 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-wnpkt"
Feb 16 02:23:36.884588 master-0 kubenswrapper[31559]: I0216 02:23:36.884515 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 16 02:23:36.898552 master-0 kubenswrapper[31559]: I0216 02:23:36.896914 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-var-lock\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:36.898552 master-0 kubenswrapper[31559]: I0216 02:23:36.897067 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9beb5b29-c163-47e3-9ec2-9f0a49946866-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:36.898552 master-0 kubenswrapper[31559]: I0216 02:23:36.897121 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:36.999280 master-0 kubenswrapper[31559]: I0216 02:23:36.999157 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-var-lock\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:36.999675 master-0 kubenswrapper[31559]: I0216 02:23:36.999534 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-var-lock\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:36.999675 master-0 kubenswrapper[31559]: I0216 02:23:36.999598 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9beb5b29-c163-47e3-9ec2-9f0a49946866-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:36.999883 master-0 kubenswrapper[31559]: I0216 02:23:36.999827 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:36.999995 master-0 kubenswrapper[31559]: I0216 02:23:36.999956 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:37.033861 master-0 kubenswrapper[31559]: I0216 02:23:37.033780 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9beb5b29-c163-47e3-9ec2-9f0a49946866-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:37.194814 master-0 kubenswrapper[31559]: I0216 02:23:37.194716 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:23:37.695716 master-0 kubenswrapper[31559]: I0216 02:23:37.694470 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 16 02:23:37.712757 master-0 kubenswrapper[31559]: W0216 02:23:37.712681 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9beb5b29_c163_47e3_9ec2_9f0a49946866.slice/crio-68dc693a74152dc0f9e5701c003a9b5be1a5054dad182d20b2686034f2ad1753 WatchSource:0}: Error finding container 68dc693a74152dc0f9e5701c003a9b5be1a5054dad182d20b2686034f2ad1753: Status 404 returned error can't find the container with id 68dc693a74152dc0f9e5701c003a9b5be1a5054dad182d20b2686034f2ad1753
Feb 16 02:23:37.974279 master-0 kubenswrapper[31559]: I0216 02:23:37.974079 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"9beb5b29-c163-47e3-9ec2-9f0a49946866","Type":"ContainerStarted","Data":"68dc693a74152dc0f9e5701c003a9b5be1a5054dad182d20b2686034f2ad1753"}
Feb 16 02:23:38.984187 master-0 kubenswrapper[31559]: I0216 02:23:38.984084 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"9beb5b29-c163-47e3-9ec2-9f0a49946866","Type":"ContainerStarted","Data":"a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e"}
Feb 16 02:23:39.016585 master-0 kubenswrapper[31559]: I0216 02:23:39.016429 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=3.016400835 podStartE2EDuration="3.016400835s" podCreationTimestamp="2026-02-16 02:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:23:39.01338748 +0000 UTC m=+71.357993565" watchObservedRunningTime="2026-02-16 02:23:39.016400835 +0000 UTC m=+71.361006890"
Feb 16 02:23:39.750413 master-0 kubenswrapper[31559]: I0216 02:23:39.750251 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5" podUID="be05b90a-997b-4183-935a-05ee1461cd65" containerName="oauth-openshift" containerID="cri-o://cdf3936da8424e68d746f1222eb12daab021fe9d4d7c357f4216d0e298e452d3" gracePeriod=15
Feb 16 02:23:40.004598 master-0 kubenswrapper[31559]: I0216 02:23:40.001884 31559 generic.go:334] "Generic (PLEG): container finished" podID="be05b90a-997b-4183-935a-05ee1461cd65" containerID="cdf3936da8424e68d746f1222eb12daab021fe9d4d7c357f4216d0e298e452d3" exitCode=0
Feb 16 02:23:40.004598 master-0 kubenswrapper[31559]: I0216 02:23:40.002055 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5" event={"ID":"be05b90a-997b-4183-935a-05ee1461cd65","Type":"ContainerDied","Data":"cdf3936da8424e68d746f1222eb12daab021fe9d4d7c357f4216d0e298e452d3"}
Feb 16 02:23:40.409693 master-0 kubenswrapper[31559]: I0216 02:23:40.409626 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5"
Feb 16 02:23:40.456524 master-0 kubenswrapper[31559]: I0216 02:23:40.455639 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-ocp-branding-template\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.456524 master-0 kubenswrapper[31559]: I0216 02:23:40.455737 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxq4b\" (UniqueName: \"kubernetes.io/projected/be05b90a-997b-4183-935a-05ee1461cd65-kube-api-access-mxq4b\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.456524 master-0 kubenswrapper[31559]: I0216 02:23:40.455798 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-error\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.456524 master-0 kubenswrapper[31559]: I0216 02:23:40.455882 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-trusted-ca-bundle\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.456524 master-0 kubenswrapper[31559]: I0216 02:23:40.455954 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-router-certs\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.457171 master-0 kubenswrapper[31559]: I0216 02:23:40.457010 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-provider-selection\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.457171 master-0 kubenswrapper[31559]: I0216 02:23:40.457133 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-serving-cert\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.457346 master-0 kubenswrapper[31559]: I0216 02:23:40.457196 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-audit-policies\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.457346 master-0 kubenswrapper[31559]: I0216 02:23:40.457240 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be05b90a-997b-4183-935a-05ee1461cd65-audit-dir\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.457346 master-0 kubenswrapper[31559]: I0216 02:23:40.457315 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-cliconfig\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.457566 master-0 kubenswrapper[31559]: I0216 02:23:40.457376 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-service-ca\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.457566 master-0 kubenswrapper[31559]: I0216 02:23:40.457407 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-login\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.457566 master-0 kubenswrapper[31559]: I0216 02:23:40.457457 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-session\") pod \"be05b90a-997b-4183-935a-05ee1461cd65\" (UID: \"be05b90a-997b-4183-935a-05ee1461cd65\") "
Feb 16 02:23:40.459190 master-0 kubenswrapper[31559]: I0216 02:23:40.458888 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be05b90a-997b-4183-935a-05ee1461cd65-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:23:40.460096 master-0 kubenswrapper[31559]: I0216 02:23:40.460015 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:23:40.460279 master-0 kubenswrapper[31559]: I0216 02:23:40.460231 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:23:40.460477 master-0 kubenswrapper[31559]: I0216 02:23:40.460377 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:23:40.460591 master-0 kubenswrapper[31559]: I0216 02:23:40.460405 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:23:40.465916 master-0 kubenswrapper[31559]: I0216 02:23:40.462686 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:23:40.465916 master-0 kubenswrapper[31559]: I0216 02:23:40.462758 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:23:40.465916 master-0 kubenswrapper[31559]: I0216 02:23:40.464135 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:23:40.465916 master-0 kubenswrapper[31559]: I0216 02:23:40.465494 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:23:40.465916 master-0 kubenswrapper[31559]: I0216 02:23:40.465629 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be05b90a-997b-4183-935a-05ee1461cd65-kube-api-access-mxq4b" (OuterVolumeSpecName: "kube-api-access-mxq4b") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "kube-api-access-mxq4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:23:40.465916 master-0 kubenswrapper[31559]: I0216 02:23:40.465834 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:23:40.466599 master-0 kubenswrapper[31559]: I0216 02:23:40.466529 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"]
Feb 16 02:23:40.467349 master-0 kubenswrapper[31559]: E0216 02:23:40.467300 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be05b90a-997b-4183-935a-05ee1461cd65" containerName="oauth-openshift"
Feb 16 02:23:40.467349 master-0 kubenswrapper[31559]: I0216 02:23:40.467337 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="be05b90a-997b-4183-935a-05ee1461cd65" containerName="oauth-openshift"
Feb 16 02:23:40.467622 master-0 kubenswrapper[31559]: I0216 02:23:40.467322 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:23:40.467821 master-0 kubenswrapper[31559]: I0216 02:23:40.467773 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="be05b90a-997b-4183-935a-05ee1461cd65" containerName="oauth-openshift"
Feb 16 02:23:40.470307 master-0 kubenswrapper[31559]: I0216 02:23:40.470265 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.472729 master-0 kubenswrapper[31559]: I0216 02:23:40.472606 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "be05b90a-997b-4183-935a-05ee1461cd65" (UID: "be05b90a-997b-4183-935a-05ee1461cd65"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:23:40.479126 master-0 kubenswrapper[31559]: I0216 02:23:40.478065 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"]
Feb 16 02:23:40.559426 master-0 kubenswrapper[31559]: I0216 02:23:40.559349 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-dir\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559662 master-0 kubenswrapper[31559]: I0216 02:23:40.559415 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559662 master-0 kubenswrapper[31559]: I0216 02:23:40.559501 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-service-ca\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559662 master-0 kubenswrapper[31559]: I0216 02:23:40.559544 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-error\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559662 master-0 kubenswrapper[31559]: I0216 02:23:40.559601 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559662 master-0 kubenswrapper[31559]: I0216 02:23:40.559637 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-login\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559999 master-0 kubenswrapper[31559]: I0216 02:23:40.559717 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559999 master-0 kubenswrapper[31559]: I0216 02:23:40.559777 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-session\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559999 master-0 kubenswrapper[31559]: I0216 02:23:40.559859 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-router-certs\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559999 master-0 kubenswrapper[31559]: I0216 02:23:40.559896 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfk6h\" (UniqueName: \"kubernetes.io/projected/a19c5154-889e-41c6-8a31-1278fee76b9d-kube-api-access-lfk6h\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.559999 master-0 kubenswrapper[31559]: I0216 02:23:40.559984 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-policies\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560020 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560076 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560155 31559 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-audit-policies\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560178 31559 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be05b90a-997b-4183-935a-05ee1461cd65-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560198 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560217 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560237 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560257 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560276 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560298 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxq4b\" (UniqueName: \"kubernetes.io/projected/be05b90a-997b-4183-935a-05ee1461cd65-kube-api-access-mxq4b\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.560322 master-0 kubenswrapper[31559]: I0216 02:23:40.560317 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.561012 master-0 kubenswrapper[31559]: I0216 02:23:40.560339 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.561012 master-0 kubenswrapper[31559]: I0216 02:23:40.560361 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.561012 master-0 kubenswrapper[31559]: I0216 02:23:40.560382 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.561012 master-0 kubenswrapper[31559]: I0216 02:23:40.560402 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be05b90a-997b-4183-935a-05ee1461cd65-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 02:23:40.661658 master-0 kubenswrapper[31559]: I0216 02:23:40.661489 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-router-certs\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.661870 master-0 kubenswrapper[31559]: I0216 02:23:40.661832 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfk6h\" (UniqueName: \"kubernetes.io/projected/a19c5154-889e-41c6-8a31-1278fee76b9d-kube-api-access-lfk6h\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.662171 master-0 kubenswrapper[31559]: I0216 02:23:40.662111 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-policies\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.662242 master-0 kubenswrapper[31559]: I0216 02:23:40.662199 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.662314 master-0 kubenswrapper[31559]: I0216 02:23:40.662285 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.662715 master-0 kubenswrapper[31559]: I0216 02:23:40.662663 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.662798 master-0 kubenswrapper[31559]: I0216 02:23:40.662721 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-dir\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.662798 master-0 kubenswrapper[31559]: I0216 02:23:40.662768 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-service-ca\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.662915 master-0 kubenswrapper[31559]: I0216 02:23:40.662833 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-error\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.662915 master-0 kubenswrapper[31559]: I0216 02:23:40.662906 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"
Feb 16 02:23:40.663036 master-0 kubenswrapper[31559]: I0216 02:23:40.662941 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName:
\"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-login\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.663036 master-0 kubenswrapper[31559]: I0216 02:23:40.662978 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.663036 master-0 kubenswrapper[31559]: I0216 02:23:40.663021 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-session\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.663334 master-0 kubenswrapper[31559]: I0216 02:23:40.663271 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-dir\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.663625 master-0 kubenswrapper[31559]: I0216 02:23:40.663559 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-policies\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 
02:23:40.665782 master-0 kubenswrapper[31559]: I0216 02:23:40.665682 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-service-ca\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.666051 master-0 kubenswrapper[31559]: I0216 02:23:40.666009 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.666676 master-0 kubenswrapper[31559]: I0216 02:23:40.666610 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-router-certs\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.667124 master-0 kubenswrapper[31559]: I0216 02:23:40.667042 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.667783 master-0 kubenswrapper[31559]: I0216 02:23:40.667726 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-error\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.668272 master-0 kubenswrapper[31559]: I0216 02:23:40.668213 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.668954 master-0 kubenswrapper[31559]: I0216 02:23:40.668888 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-session\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.671089 master-0 kubenswrapper[31559]: I0216 02:23:40.671021 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-login\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.672033 master-0 kubenswrapper[31559]: I0216 02:23:40.671965 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " 
pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.672131 master-0 kubenswrapper[31559]: I0216 02:23:40.672055 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.683357 master-0 kubenswrapper[31559]: I0216 02:23:40.683293 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfk6h\" (UniqueName: \"kubernetes.io/projected/a19c5154-889e-41c6-8a31-1278fee76b9d-kube-api-access-lfk6h\") pod \"oauth-openshift-645f9fcbc6-lsqk8\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:40.847091 master-0 kubenswrapper[31559]: I0216 02:23:40.846776 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:41.050393 master-0 kubenswrapper[31559]: I0216 02:23:41.050324 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5" Feb 16 02:23:41.050999 master-0 kubenswrapper[31559]: I0216 02:23:41.050344 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-565b9bd659-h8ks5" event={"ID":"be05b90a-997b-4183-935a-05ee1461cd65","Type":"ContainerDied","Data":"87803ec4d61e8a24398b474a4f909ce87b4c9d03a0f6b36650a71a600eadd419"} Feb 16 02:23:41.050999 master-0 kubenswrapper[31559]: I0216 02:23:41.050534 31559 scope.go:117] "RemoveContainer" containerID="cdf3936da8424e68d746f1222eb12daab021fe9d4d7c357f4216d0e298e452d3" Feb 16 02:23:41.133545 master-0 kubenswrapper[31559]: I0216 02:23:41.132472 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-565b9bd659-h8ks5"] Feb 16 02:23:41.134893 master-0 kubenswrapper[31559]: I0216 02:23:41.134818 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-565b9bd659-h8ks5"] Feb 16 02:23:41.429874 master-0 kubenswrapper[31559]: I0216 02:23:41.429813 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"] Feb 16 02:23:41.444615 master-0 kubenswrapper[31559]: W0216 02:23:41.444539 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda19c5154_889e_41c6_8a31_1278fee76b9d.slice/crio-ac80794df23ebbe8cebc38b294eabbae9f74584205cd49d218bcbd26a54c14de WatchSource:0}: Error finding container ac80794df23ebbe8cebc38b294eabbae9f74584205cd49d218bcbd26a54c14de: Status 404 returned error can't find the container with id ac80794df23ebbe8cebc38b294eabbae9f74584205cd49d218bcbd26a54c14de Feb 16 02:23:41.937127 master-0 kubenswrapper[31559]: I0216 02:23:41.937046 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be05b90a-997b-4183-935a-05ee1461cd65" 
path="/var/lib/kubelet/pods/be05b90a-997b-4183-935a-05ee1461cd65/volumes" Feb 16 02:23:42.060735 master-0 kubenswrapper[31559]: I0216 02:23:42.060588 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" event={"ID":"a19c5154-889e-41c6-8a31-1278fee76b9d","Type":"ContainerStarted","Data":"1e37cadb894aba0e691914d9f5cb8b6bc3df98b9a587bf6b74164aec63113fef"} Feb 16 02:23:42.060735 master-0 kubenswrapper[31559]: I0216 02:23:42.060653 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" event={"ID":"a19c5154-889e-41c6-8a31-1278fee76b9d","Type":"ContainerStarted","Data":"ac80794df23ebbe8cebc38b294eabbae9f74584205cd49d218bcbd26a54c14de"} Feb 16 02:23:42.061921 master-0 kubenswrapper[31559]: I0216 02:23:42.060863 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:42.128520 master-0 kubenswrapper[31559]: I0216 02:23:42.128372 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" podStartSLOduration=28.128345498 podStartE2EDuration="28.128345498s" podCreationTimestamp="2026-02-16 02:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:23:42.12558937 +0000 UTC m=+74.470195395" watchObservedRunningTime="2026-02-16 02:23:42.128345498 +0000 UTC m=+74.472951523" Feb 16 02:23:42.188105 master-0 kubenswrapper[31559]: I0216 02:23:42.188025 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 
02:23:42.193130 master-0 kubenswrapper[31559]: I0216 02:23:42.193075 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 02:23:42.268756 master-0 kubenswrapper[31559]: I0216 02:23:42.268658 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:23:42.289004 master-0 kubenswrapper[31559]: I0216 02:23:42.288916 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") pod \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\" (UID: \"20bf60f7-9e36-477e-96a5-4fc8dc1bca5e\") " Feb 16 02:23:42.295836 master-0 kubenswrapper[31559]: I0216 02:23:42.295741 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e" (UID: "20bf60f7-9e36-477e-96a5-4fc8dc1bca5e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:23:42.390722 master-0 kubenswrapper[31559]: I0216 02:23:42.390665 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20bf60f7-9e36-477e-96a5-4fc8dc1bca5e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:23:51.850775 master-0 kubenswrapper[31559]: I0216 02:23:51.850688 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 16 02:23:51.851781 master-0 kubenswrapper[31559]: I0216 02:23:51.851059 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="9beb5b29-c163-47e3-9ec2-9f0a49946866" containerName="installer" containerID="cri-o://a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e" gracePeriod=30 Feb 16 02:23:55.653930 master-0 kubenswrapper[31559]: I0216 02:23:55.650575 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 16 02:23:55.653930 master-0 kubenswrapper[31559]: I0216 02:23:55.652287 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.668475 master-0 kubenswrapper[31559]: I0216 02:23:55.665984 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 16 02:23:55.818151 master-0 kubenswrapper[31559]: I0216 02:23:55.818056 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/470e2f5e-6f60-49be-850b-8a0df6566fdd-kube-api-access\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.818763 master-0 kubenswrapper[31559]: I0216 02:23:55.818699 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-var-lock\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.818877 master-0 kubenswrapper[31559]: I0216 02:23:55.818780 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.921033 master-0 kubenswrapper[31559]: I0216 02:23:55.920946 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-var-lock\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.921208 master-0 kubenswrapper[31559]: I0216 02:23:55.921033 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.921208 master-0 kubenswrapper[31559]: I0216 02:23:55.921076 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-var-lock\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.921208 master-0 kubenswrapper[31559]: I0216 02:23:55.921100 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/470e2f5e-6f60-49be-850b-8a0df6566fdd-kube-api-access\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.921208 master-0 kubenswrapper[31559]: I0216 02:23:55.921127 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:55.957683 master-0 kubenswrapper[31559]: I0216 02:23:55.957627 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/470e2f5e-6f60-49be-850b-8a0df6566fdd-kube-api-access\") pod \"installer-5-master-0\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:56.037788 master-0 kubenswrapper[31559]: I0216 02:23:56.037583 31559 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 02:23:56.562200 master-0 kubenswrapper[31559]: I0216 02:23:56.562041 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 16 02:23:56.571690 master-0 kubenswrapper[31559]: W0216 02:23:56.571606 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod470e2f5e_6f60_49be_850b_8a0df6566fdd.slice/crio-060e3db121c315c88da13820615d026715b125f8cd2da5185c18e437df0b6cd7 WatchSource:0}: Error finding container 060e3db121c315c88da13820615d026715b125f8cd2da5185c18e437df0b6cd7: Status 404 returned error can't find the container with id 060e3db121c315c88da13820615d026715b125f8cd2da5185c18e437df0b6cd7 Feb 16 02:23:57.200363 master-0 kubenswrapper[31559]: I0216 02:23:57.200282 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"470e2f5e-6f60-49be-850b-8a0df6566fdd","Type":"ContainerStarted","Data":"b065b003f5a170ba47608c050d3ce15d97b13b2c5aafda2e2d02ab5ae8e130df"} Feb 16 02:23:57.201422 master-0 kubenswrapper[31559]: I0216 02:23:57.200475 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"470e2f5e-6f60-49be-850b-8a0df6566fdd","Type":"ContainerStarted","Data":"060e3db121c315c88da13820615d026715b125f8cd2da5185c18e437df0b6cd7"} Feb 16 02:23:57.225323 master-0 kubenswrapper[31559]: I0216 02:23:57.225087 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.2250566689999998 podStartE2EDuration="2.225056669s" podCreationTimestamp="2026-02-16 02:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:23:57.223584953 +0000 UTC m=+89.568190998" 
watchObservedRunningTime="2026-02-16 02:23:57.225056669 +0000 UTC m=+89.569662724" Feb 16 02:24:10.209266 master-0 kubenswrapper[31559]: I0216 02:24:10.209221 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_9beb5b29-c163-47e3-9ec2-9f0a49946866/installer/0.log" Feb 16 02:24:10.210152 master-0 kubenswrapper[31559]: I0216 02:24:10.210121 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 02:24:10.305960 master-0 kubenswrapper[31559]: I0216 02:24:10.305815 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-var-lock\") pod \"9beb5b29-c163-47e3-9ec2-9f0a49946866\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " Feb 16 02:24:10.306235 master-0 kubenswrapper[31559]: I0216 02:24:10.305991 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-var-lock" (OuterVolumeSpecName: "var-lock") pod "9beb5b29-c163-47e3-9ec2-9f0a49946866" (UID: "9beb5b29-c163-47e3-9ec2-9f0a49946866"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:24:10.306235 master-0 kubenswrapper[31559]: I0216 02:24:10.306106 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9beb5b29-c163-47e3-9ec2-9f0a49946866-kube-api-access\") pod \"9beb5b29-c163-47e3-9ec2-9f0a49946866\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " Feb 16 02:24:10.306235 master-0 kubenswrapper[31559]: I0216 02:24:10.306149 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-kubelet-dir\") pod \"9beb5b29-c163-47e3-9ec2-9f0a49946866\" (UID: \"9beb5b29-c163-47e3-9ec2-9f0a49946866\") " Feb 16 02:24:10.306681 master-0 kubenswrapper[31559]: I0216 02:24:10.306627 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9beb5b29-c163-47e3-9ec2-9f0a49946866" (UID: "9beb5b29-c163-47e3-9ec2-9f0a49946866"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:24:10.307039 master-0 kubenswrapper[31559]: I0216 02:24:10.306991 31559 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:24:10.307277 master-0 kubenswrapper[31559]: I0216 02:24:10.307036 31559 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9beb5b29-c163-47e3-9ec2-9f0a49946866-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:24:10.311613 master-0 kubenswrapper[31559]: I0216 02:24:10.311559 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9beb5b29-c163-47e3-9ec2-9f0a49946866-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9beb5b29-c163-47e3-9ec2-9f0a49946866" (UID: "9beb5b29-c163-47e3-9ec2-9f0a49946866"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:24:10.356633 master-0 kubenswrapper[31559]: I0216 02:24:10.356263 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_9beb5b29-c163-47e3-9ec2-9f0a49946866/installer/0.log" Feb 16 02:24:10.356633 master-0 kubenswrapper[31559]: I0216 02:24:10.356345 31559 generic.go:334] "Generic (PLEG): container finished" podID="9beb5b29-c163-47e3-9ec2-9f0a49946866" containerID="a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e" exitCode=1 Feb 16 02:24:10.356633 master-0 kubenswrapper[31559]: I0216 02:24:10.356391 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"9beb5b29-c163-47e3-9ec2-9f0a49946866","Type":"ContainerDied","Data":"a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e"} Feb 16 02:24:10.356633 master-0 kubenswrapper[31559]: I0216 02:24:10.356432 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 02:24:10.356633 master-0 kubenswrapper[31559]: I0216 02:24:10.356475 31559 scope.go:117] "RemoveContainer" containerID="a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e"
Feb 16 02:24:10.356633 master-0 kubenswrapper[31559]: I0216 02:24:10.356429 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"9beb5b29-c163-47e3-9ec2-9f0a49946866","Type":"ContainerDied","Data":"68dc693a74152dc0f9e5701c003a9b5be1a5054dad182d20b2686034f2ad1753"}
Feb 16 02:24:10.385261 master-0 kubenswrapper[31559]: I0216 02:24:10.385192 31559 scope.go:117] "RemoveContainer" containerID="a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e"
Feb 16 02:24:10.385888 master-0 kubenswrapper[31559]: E0216 02:24:10.385776 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e\": container with ID starting with a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e not found: ID does not exist" containerID="a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e"
Feb 16 02:24:10.385888 master-0 kubenswrapper[31559]: I0216 02:24:10.385874 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e"} err="failed to get container status \"a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e\": rpc error: code = NotFound desc = could not find container \"a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e\": container with ID starting with a1a8b11e5475aac3b80209d00b6b722fce406bfe793a6cf1ee6e4a948f24542e not found: ID does not exist"
Feb 16 02:24:10.408670 master-0 kubenswrapper[31559]: I0216 02:24:10.408536 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9beb5b29-c163-47e3-9ec2-9f0a49946866-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 02:24:10.428598 master-0 kubenswrapper[31559]: I0216 02:24:10.427761 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 16 02:24:10.439938 master-0 kubenswrapper[31559]: I0216 02:24:10.439090 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 16 02:24:11.940402 master-0 kubenswrapper[31559]: I0216 02:24:11.940327 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9beb5b29-c163-47e3-9ec2-9f0a49946866" path="/var/lib/kubelet/pods/9beb5b29-c163-47e3-9ec2-9f0a49946866/volumes"
Feb 16 02:24:55.448137 master-0 kubenswrapper[31559]: I0216 02:24:55.448045 31559 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 16 02:24:55.449466 master-0 kubenswrapper[31559]: E0216 02:24:55.448544 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9beb5b29-c163-47e3-9ec2-9f0a49946866" containerName="installer"
Feb 16 02:24:55.449466 master-0 kubenswrapper[31559]: I0216 02:24:55.448567 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9beb5b29-c163-47e3-9ec2-9f0a49946866" containerName="installer"
Feb 16 02:24:55.449466 master-0 kubenswrapper[31559]: I0216 02:24:55.448804 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9beb5b29-c163-47e3-9ec2-9f0a49946866" containerName="installer"
Feb 16 02:24:55.449466 master-0 kubenswrapper[31559]: I0216 02:24:55.449386 31559 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 02:24:55.449841 master-0 kubenswrapper[31559]: I0216 02:24:55.449638 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.449951 master-0 kubenswrapper[31559]: I0216 02:24:55.449883 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://23f1844f084bd578a72562cf5fb2523c0cb62c0a661c5a07d573cb0c56ece51d" gracePeriod=15
Feb 16 02:24:55.450051 master-0 kubenswrapper[31559]: I0216 02:24:55.449951 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fd47e260d84ca6db305938f1f4e1895a6f6bdda99aeb361b11a3ab5204667a82" gracePeriod=15
Feb 16 02:24:55.450218 master-0 kubenswrapper[31559]: I0216 02:24:55.450065 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8390bbe4d8742fdad642a6f50a5fdda06aa95077fda8b2a4a38589b254209605" gracePeriod=15
Feb 16 02:24:55.450342 master-0 kubenswrapper[31559]: I0216 02:24:55.450114 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4f2b3293fb881e9e69c8104ce58e27052e815106ebd3a16fe20d387a1610c59c" gracePeriod=15
Feb 16 02:24:55.450596 master-0 kubenswrapper[31559]: I0216 02:24:55.450488 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver" containerID="cri-o://fec477df3910b6405819fe05a8d3d1b8456afafc6dfca3d23c64fa136cd595d6" gracePeriod=15
Feb 16 02:24:55.454371 master-0 kubenswrapper[31559]: I0216 02:24:55.454276 31559 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 02:24:55.454842 master-0 kubenswrapper[31559]: E0216 02:24:55.454778 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 02:24:55.454842 master-0 kubenswrapper[31559]: I0216 02:24:55.454820 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 02:24:55.454842 master-0 kubenswrapper[31559]: E0216 02:24:55.454845 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: I0216 02:24:55.454863 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: E0216 02:24:55.454901 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-syncer"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: I0216 02:24:55.454919 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-syncer"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: E0216 02:24:55.454947 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-insecure-readyz"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: I0216 02:24:55.454962 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-insecure-readyz"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: E0216 02:24:55.454994 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: I0216 02:24:55.455011 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: E0216 02:24:55.455049 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="setup"
Feb 16 02:24:55.455125 master-0 kubenswrapper[31559]: I0216 02:24:55.455066 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="setup"
Feb 16 02:24:55.460629 master-0 kubenswrapper[31559]: I0216 02:24:55.455334 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver"
Feb 16 02:24:55.460629 master-0 kubenswrapper[31559]: I0216 02:24:55.455377 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 02:24:55.460629 master-0 kubenswrapper[31559]: I0216 02:24:55.455408 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 16 02:24:55.460629 master-0 kubenswrapper[31559]: I0216 02:24:55.455471 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-insecure-readyz"
Feb 16 02:24:55.460629 master-0 kubenswrapper[31559]: I0216 02:24:55.455574 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-cert-syncer"
Feb 16 02:24:55.460629 master-0 kubenswrapper[31559]: E0216 02:24:55.458741 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 16 02:24:55.460629 master-0 kubenswrapper[31559]: I0216 02:24:55.458778 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 16 02:24:55.460629 master-0 kubenswrapper[31559]: I0216 02:24:55.459113 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver-check-endpoints"
Feb 16 02:24:55.629250 master-0 kubenswrapper[31559]: I0216 02:24:55.629161 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.629412 master-0 kubenswrapper[31559]: I0216 02:24:55.629260 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.629412 master-0 kubenswrapper[31559]: I0216 02:24:55.629343 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.629533 master-0 kubenswrapper[31559]: I0216 02:24:55.629450 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.629533 master-0 kubenswrapper[31559]: I0216 02:24:55.629475 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.629533 master-0 kubenswrapper[31559]: I0216 02:24:55.629497 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.629773 master-0 kubenswrapper[31559]: I0216 02:24:55.629623 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.629773 master-0 kubenswrapper[31559]: I0216 02:24:55.629693 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.709858 master-0 kubenswrapper[31559]: I0216 02:24:55.709747 31559 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body=
Feb 16 02:24:55.709858 master-0 kubenswrapper[31559]: I0216 02:24:55.709833 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="619e637b8575311b72d43b7b782d610a" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:55.710933 master-0 kubenswrapper[31559]: E0216 02:24:55.710762 31559 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=<
Feb 16 02:24:55.710933 master-0 kubenswrapper[31559]: &Event{ObjectMeta:{kube-apiserver-master-0.189498ecaaddad3b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:619e637b8575311b72d43b7b782d610a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 02:24:55.710933 master-0 kubenswrapper[31559]: body:
Feb 16 02:24:55.710933 master-0 kubenswrapper[31559]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:24:55.709805883 +0000 UTC m=+148.054411938,LastTimestamp:2026-02-16 02:24:55.709805883 +0000 UTC m=+148.054411938,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Feb 16 02:24:55.710933 master-0 kubenswrapper[31559]: >
Feb 16 02:24:55.731045 master-0 kubenswrapper[31559]: I0216 02:24:55.730969 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.731045 master-0 kubenswrapper[31559]: I0216 02:24:55.731027 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731158 master-0 kubenswrapper[31559]: I0216 02:24:55.731081 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.731158 master-0 kubenswrapper[31559]: I0216 02:24:55.731093 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.731158 master-0 kubenswrapper[31559]: I0216 02:24:55.731109 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731252 master-0 kubenswrapper[31559]: I0216 02:24:55.731182 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731339 master-0 kubenswrapper[31559]: I0216 02:24:55.731304 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731404 master-0 kubenswrapper[31559]: I0216 02:24:55.731381 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731404 master-0 kubenswrapper[31559]: I0216 02:24:55.731307 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731489 master-0 kubenswrapper[31559]: I0216 02:24:55.731399 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.731489 master-0 kubenswrapper[31559]: I0216 02:24:55.731458 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731558 master-0 kubenswrapper[31559]: I0216 02:24:55.731487 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731558 master-0 kubenswrapper[31559]: I0216 02:24:55.731535 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.731617 master-0 kubenswrapper[31559]: I0216 02:24:55.731598 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.731650 master-0 kubenswrapper[31559]: I0216 02:24:55.731632 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:55.731733 master-0 kubenswrapper[31559]: I0216 02:24:55.731706 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:24:55.743572 master-0 kubenswrapper[31559]: I0216 02:24:55.742382 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-check-endpoints/0.log"
Feb 16 02:24:55.745756 master-0 kubenswrapper[31559]: I0216 02:24:55.745696 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log"
Feb 16 02:24:55.746930 master-0 kubenswrapper[31559]: I0216 02:24:55.746824 31559 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="4f2b3293fb881e9e69c8104ce58e27052e815106ebd3a16fe20d387a1610c59c" exitCode=0
Feb 16 02:24:55.747031 master-0 kubenswrapper[31559]: I0216 02:24:55.746948 31559 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="23f1844f084bd578a72562cf5fb2523c0cb62c0a661c5a07d573cb0c56ece51d" exitCode=0
Feb 16 02:24:55.747031 master-0 kubenswrapper[31559]: I0216 02:24:55.746892 31559 scope.go:117] "RemoveContainer" containerID="c1ba2d68a64d6fb932ae524cee345f61dbf00431978608d5398de81a322f1f49"
Feb 16 02:24:55.747120 master-0 kubenswrapper[31559]: I0216 02:24:55.746975 31559 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="fd47e260d84ca6db305938f1f4e1895a6f6bdda99aeb361b11a3ab5204667a82" exitCode=0
Feb 16 02:24:55.747120 master-0 kubenswrapper[31559]: I0216 02:24:55.747095 31559 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="8390bbe4d8742fdad642a6f50a5fdda06aa95077fda8b2a4a38589b254209605" exitCode=2
Feb 16 02:24:55.749334 master-0 kubenswrapper[31559]: I0216 02:24:55.749286 31559 generic.go:334] "Generic (PLEG): container finished" podID="470e2f5e-6f60-49be-850b-8a0df6566fdd" containerID="b065b003f5a170ba47608c050d3ce15d97b13b2c5aafda2e2d02ab5ae8e130df" exitCode=0
Feb 16 02:24:55.749402 master-0 kubenswrapper[31559]: I0216 02:24:55.749335 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"470e2f5e-6f60-49be-850b-8a0df6566fdd","Type":"ContainerDied","Data":"b065b003f5a170ba47608c050d3ce15d97b13b2c5aafda2e2d02ab5ae8e130df"}
Feb 16 02:24:55.750603 master-0 kubenswrapper[31559]: I0216 02:24:55.750505 31559 status_manager.go:851] "Failed to get status for pod" podUID="619e637b8575311b72d43b7b782d610a" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:55.752930 master-0 kubenswrapper[31559]: I0216 02:24:55.752834 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:56.764179 master-0 kubenswrapper[31559]: I0216 02:24:56.764097 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log"
Feb 16 02:24:57.200338 master-0 kubenswrapper[31559]: E0216 02:24:57.200249 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.201286 master-0 kubenswrapper[31559]: E0216 02:24:57.201223 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.202194 master-0 kubenswrapper[31559]: E0216 02:24:57.202139 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.202915 master-0 kubenswrapper[31559]: E0216 02:24:57.202847 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.204137 master-0 kubenswrapper[31559]: E0216 02:24:57.204082 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.204208 master-0 kubenswrapper[31559]: I0216 02:24:57.204136 31559 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 16 02:24:57.204974 master-0 kubenswrapper[31559]: E0216 02:24:57.204911 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 16 02:24:57.210314 master-0 kubenswrapper[31559]: I0216 02:24:57.210280 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 16 02:24:57.211375 master-0 kubenswrapper[31559]: I0216 02:24:57.211259 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.278469 master-0 kubenswrapper[31559]: I0216 02:24:57.278354 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/470e2f5e-6f60-49be-850b-8a0df6566fdd-kube-api-access\") pod \"470e2f5e-6f60-49be-850b-8a0df6566fdd\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") "
Feb 16 02:24:57.278697 master-0 kubenswrapper[31559]: I0216 02:24:57.278577 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-kubelet-dir\") pod \"470e2f5e-6f60-49be-850b-8a0df6566fdd\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") "
Feb 16 02:24:57.278776 master-0 kubenswrapper[31559]: I0216 02:24:57.278702 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "470e2f5e-6f60-49be-850b-8a0df6566fdd" (UID: "470e2f5e-6f60-49be-850b-8a0df6566fdd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:24:57.278853 master-0 kubenswrapper[31559]: I0216 02:24:57.278790 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-var-lock" (OuterVolumeSpecName: "var-lock") pod "470e2f5e-6f60-49be-850b-8a0df6566fdd" (UID: "470e2f5e-6f60-49be-850b-8a0df6566fdd"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:24:57.278853 master-0 kubenswrapper[31559]: I0216 02:24:57.278740 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-var-lock\") pod \"470e2f5e-6f60-49be-850b-8a0df6566fdd\" (UID: \"470e2f5e-6f60-49be-850b-8a0df6566fdd\") "
Feb 16 02:24:57.279396 master-0 kubenswrapper[31559]: I0216 02:24:57.279337 31559 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 02:24:57.279396 master-0 kubenswrapper[31559]: I0216 02:24:57.279372 31559 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/470e2f5e-6f60-49be-850b-8a0df6566fdd-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:24:57.282932 master-0 kubenswrapper[31559]: I0216 02:24:57.282852 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470e2f5e-6f60-49be-850b-8a0df6566fdd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "470e2f5e-6f60-49be-850b-8a0df6566fdd" (UID: "470e2f5e-6f60-49be-850b-8a0df6566fdd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:24:57.381187 master-0 kubenswrapper[31559]: I0216 02:24:57.381094 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/470e2f5e-6f60-49be-850b-8a0df6566fdd-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 02:24:57.406806 master-0 kubenswrapper[31559]: E0216 02:24:57.406721 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 16 02:24:57.775289 master-0 kubenswrapper[31559]: I0216 02:24:57.775232 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log"
Feb 16 02:24:57.776181 master-0 kubenswrapper[31559]: I0216 02:24:57.776123 31559 generic.go:334] "Generic (PLEG): container finished" podID="619e637b8575311b72d43b7b782d610a" containerID="fec477df3910b6405819fe05a8d3d1b8456afafc6dfca3d23c64fa136cd595d6" exitCode=0
Feb 16 02:24:57.777620 master-0 kubenswrapper[31559]: I0216 02:24:57.777590 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"470e2f5e-6f60-49be-850b-8a0df6566fdd","Type":"ContainerDied","Data":"060e3db121c315c88da13820615d026715b125f8cd2da5185c18e437df0b6cd7"}
Feb 16 02:24:57.777721 master-0 kubenswrapper[31559]: I0216 02:24:57.777682 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 16 02:24:57.777861 master-0 kubenswrapper[31559]: I0216 02:24:57.777690 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="060e3db121c315c88da13820615d026715b125f8cd2da5185c18e437df0b6cd7"
Feb 16 02:24:57.808754 master-0 kubenswrapper[31559]: E0216 02:24:57.808659 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 16 02:24:57.909830 master-0 kubenswrapper[31559]: I0216 02:24:57.909689 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.912195 master-0 kubenswrapper[31559]: I0216 02:24:57.912128 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log"
Feb 16 02:24:57.914830 master-0 kubenswrapper[31559]: I0216 02:24:57.914758 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 02:24:57.917503 master-0 kubenswrapper[31559]: I0216 02:24:57.917290 31559 status_manager.go:851] "Failed to get status for pod" podUID="619e637b8575311b72d43b7b782d610a" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.918565 master-0 kubenswrapper[31559]: I0216 02:24:57.918496 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.934289 master-0 kubenswrapper[31559]: I0216 02:24:57.934220 31559 status_manager.go:851] "Failed to get status for pod" podUID="619e637b8575311b72d43b7b782d610a" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.935842 master-0 kubenswrapper[31559]: I0216 02:24:57.935731 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 02:24:57.996184 master-0 kubenswrapper[31559]: I0216 02:24:57.996001 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") pod \"619e637b8575311b72d43b7b782d610a\" (UID: \"619e637b8575311b72d43b7b782d610a\") "
Feb 16 02:24:57.996184 master-0 kubenswrapper[31559]: I0216 02:24:57.996113 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") pod \"619e637b8575311b72d43b7b782d610a\" (UID: \"619e637b8575311b72d43b7b782d610a\") "
Feb 16 02:24:57.996577 master-0 kubenswrapper[31559]: I0216 02:24:57.996174 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "619e637b8575311b72d43b7b782d610a" (UID: "619e637b8575311b72d43b7b782d610a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:24:57.996577 master-0 kubenswrapper[31559]: I0216 02:24:57.996231 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") pod \"619e637b8575311b72d43b7b782d610a\" (UID: \"619e637b8575311b72d43b7b782d610a\") "
Feb 16 02:24:57.996577 master-0 kubenswrapper[31559]: I0216 02:24:57.996300 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "619e637b8575311b72d43b7b782d610a" (UID: "619e637b8575311b72d43b7b782d610a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:24:57.996577 master-0 kubenswrapper[31559]: I0216 02:24:57.996505 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "619e637b8575311b72d43b7b782d610a" (UID: "619e637b8575311b72d43b7b782d610a"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:24:57.998063 master-0 kubenswrapper[31559]: I0216 02:24:57.997982 31559 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:24:57.998063 master-0 kubenswrapper[31559]: I0216 02:24:57.998039 31559 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:24:57.998432 master-0 kubenswrapper[31559]: I0216 02:24:57.998082 31559 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619e637b8575311b72d43b7b782d610a-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:24:58.610349 master-0 kubenswrapper[31559]: E0216 02:24:58.610279 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 16 02:24:58.793574 master-0 kubenswrapper[31559]: I0216 02:24:58.793522 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_619e637b8575311b72d43b7b782d610a/kube-apiserver-cert-syncer/0.log"
Feb 16 02:24:58.795066 master-0 kubenswrapper[31559]: I0216 02:24:58.795020 31559 
scope.go:117] "RemoveContainer" containerID="4f2b3293fb881e9e69c8104ce58e27052e815106ebd3a16fe20d387a1610c59c" Feb 16 02:24:58.795215 master-0 kubenswrapper[31559]: I0216 02:24:58.795105 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:24:58.799088 master-0 kubenswrapper[31559]: I0216 02:24:58.799018 31559 status_manager.go:851] "Failed to get status for pod" podUID="619e637b8575311b72d43b7b782d610a" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:24:58.800278 master-0 kubenswrapper[31559]: I0216 02:24:58.800208 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:24:58.822115 master-0 kubenswrapper[31559]: I0216 02:24:58.822045 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:24:58.822733 master-0 kubenswrapper[31559]: I0216 02:24:58.822684 31559 status_manager.go:851] "Failed to get status for pod" podUID="619e637b8575311b72d43b7b782d610a" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 
02:24:58.823086 master-0 kubenswrapper[31559]: I0216 02:24:58.823024 31559 scope.go:117] "RemoveContainer" containerID="23f1844f084bd578a72562cf5fb2523c0cb62c0a661c5a07d573cb0c56ece51d" Feb 16 02:24:58.848878 master-0 kubenswrapper[31559]: I0216 02:24:58.848828 31559 scope.go:117] "RemoveContainer" containerID="fd47e260d84ca6db305938f1f4e1895a6f6bdda99aeb361b11a3ab5204667a82" Feb 16 02:24:58.873505 master-0 kubenswrapper[31559]: I0216 02:24:58.873418 31559 scope.go:117] "RemoveContainer" containerID="8390bbe4d8742fdad642a6f50a5fdda06aa95077fda8b2a4a38589b254209605" Feb 16 02:24:58.903767 master-0 kubenswrapper[31559]: I0216 02:24:58.903712 31559 scope.go:117] "RemoveContainer" containerID="fec477df3910b6405819fe05a8d3d1b8456afafc6dfca3d23c64fa136cd595d6" Feb 16 02:24:58.930288 master-0 kubenswrapper[31559]: I0216 02:24:58.930229 31559 scope.go:117] "RemoveContainer" containerID="2f315c09e62d7e5ecdac8433decccf201da1935e2dc178927c912fe29e35daf4" Feb 16 02:24:59.938885 master-0 kubenswrapper[31559]: I0216 02:24:59.938817 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619e637b8575311b72d43b7b782d610a" path="/var/lib/kubelet/pods/619e637b8575311b72d43b7b782d610a/volumes" Feb 16 02:25:00.213493 master-0 kubenswrapper[31559]: E0216 02:25:00.213307 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 02:25:00.511059 master-0 kubenswrapper[31559]: E0216 02:25:00.510874 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:25:00.511766 master-0 kubenswrapper[31559]: I0216 
02:25:00.511712 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:25:00.544625 master-0 kubenswrapper[31559]: W0216 02:25:00.544521 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ef22dacc42282620a76fbbcd3b157ad.slice/crio-6e1714a1a356feb9b8223e5ebc79742c102f4b710726d83a678e25e7826ae06d WatchSource:0}: Error finding container 6e1714a1a356feb9b8223e5ebc79742c102f4b710726d83a678e25e7826ae06d: Status 404 returned error can't find the container with id 6e1714a1a356feb9b8223e5ebc79742c102f4b710726d83a678e25e7826ae06d Feb 16 02:25:00.817121 master-0 kubenswrapper[31559]: I0216 02:25:00.817059 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"0ef22dacc42282620a76fbbcd3b157ad","Type":"ContainerStarted","Data":"6e1714a1a356feb9b8223e5ebc79742c102f4b710726d83a678e25e7826ae06d"} Feb 16 02:25:01.830592 master-0 kubenswrapper[31559]: I0216 02:25:01.830501 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"0ef22dacc42282620a76fbbcd3b157ad","Type":"ContainerStarted","Data":"eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997"} Feb 16 02:25:01.832006 master-0 kubenswrapper[31559]: E0216 02:25:01.831886 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:25:01.832006 master-0 kubenswrapper[31559]: I0216 02:25:01.831947 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" 
pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:25:02.839413 master-0 kubenswrapper[31559]: E0216 02:25:02.839286 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:25:03.415186 master-0 kubenswrapper[31559]: E0216 02:25:03.415096 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 16 02:25:04.693662 master-0 kubenswrapper[31559]: E0216 02:25:04.693365 31559 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Feb 16 02:25:04.693662 master-0 kubenswrapper[31559]: &Event{ObjectMeta:{kube-apiserver-master-0.189498ecaaddad3b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:619e637b8575311b72d43b7b782d610a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 02:25:04.693662 master-0 kubenswrapper[31559]: body: Feb 16 02:25:04.693662 master-0 kubenswrapper[31559]: 
,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:24:55.709805883 +0000 UTC m=+148.054411938,LastTimestamp:2026-02-16 02:24:55.709805883 +0000 UTC m=+148.054411938,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 16 02:25:04.693662 master-0 kubenswrapper[31559]: > Feb 16 02:25:07.924864 master-0 kubenswrapper[31559]: I0216 02:25:07.924723 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:07.933797 master-0 kubenswrapper[31559]: I0216 02:25:07.933481 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:25:07.934377 master-0 kubenswrapper[31559]: I0216 02:25:07.934285 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:25:07.964540 master-0 kubenswrapper[31559]: I0216 02:25:07.964468 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:07.964540 master-0 kubenswrapper[31559]: I0216 02:25:07.964516 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:07.965677 master-0 kubenswrapper[31559]: E0216 02:25:07.965594 31559 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:07.966274 master-0 kubenswrapper[31559]: I0216 02:25:07.966228 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:07.998368 master-0 kubenswrapper[31559]: W0216 02:25:07.998288 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaa15f80078a2bfbe2234a74ab4da87c.slice/crio-3a8c6eb565e6a082a57364dba0406d74d3be50c1d4ab9d2eb66a7a22c9ab8cda WatchSource:0}: Error finding container 3a8c6eb565e6a082a57364dba0406d74d3be50c1d4ab9d2eb66a7a22c9ab8cda: Status 404 returned error can't find the container with id 3a8c6eb565e6a082a57364dba0406d74d3be50c1d4ab9d2eb66a7a22c9ab8cda Feb 16 02:25:08.646384 master-0 kubenswrapper[31559]: I0216 02:25:08.646234 31559 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 16 02:25:08.646384 master-0 kubenswrapper[31559]: I0216 02:25:08.646320 31559 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 16 02:25:08.895096 master-0 kubenswrapper[31559]: I0216 02:25:08.895014 31559 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log" Feb 16 02:25:08.896083 master-0 kubenswrapper[31559]: I0216 02:25:08.896028 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/2.log" Feb 16 02:25:08.897181 master-0 kubenswrapper[31559]: I0216 02:25:08.897050 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/1.log" Feb 16 02:25:08.898590 master-0 kubenswrapper[31559]: I0216 02:25:08.898537 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log" Feb 16 02:25:08.898717 master-0 kubenswrapper[31559]: I0216 02:25:08.898621 31559 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="5fc96fb916b196b3dbc229cfd525c7d85b5052106365d264bf8c22b6c5329dbb" exitCode=1 Feb 16 02:25:08.898796 master-0 kubenswrapper[31559]: I0216 02:25:08.898757 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"5fc96fb916b196b3dbc229cfd525c7d85b5052106365d264bf8c22b6c5329dbb"} Feb 16 02:25:08.898872 master-0 kubenswrapper[31559]: I0216 02:25:08.898848 31559 scope.go:117] "RemoveContainer" containerID="c315dd4013bf47ddda1fd2c99a095489c35ec0eda907e0f77d5a4d2d27ec8d89" Feb 16 02:25:08.900746 master-0 kubenswrapper[31559]: I0216 02:25:08.900250 31559 scope.go:117] "RemoveContainer" containerID="5fc96fb916b196b3dbc229cfd525c7d85b5052106365d264bf8c22b6c5329dbb" Feb 16 
02:25:08.900878 master-0 kubenswrapper[31559]: I0216 02:25:08.900821 31559 status_manager.go:851] "Failed to get status for pod" podUID="532487ad51c30257b744e7c1c79fb34f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:25:08.901682 master-0 kubenswrapper[31559]: I0216 02:25:08.901631 31559 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471" exitCode=0 Feb 16 02:25:08.901796 master-0 kubenswrapper[31559]: I0216 02:25:08.901681 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerDied","Data":"5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471"} Feb 16 02:25:08.901796 master-0 kubenswrapper[31559]: I0216 02:25:08.901739 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"3a8c6eb565e6a082a57364dba0406d74d3be50c1d4ab9d2eb66a7a22c9ab8cda"} Feb 16 02:25:08.901930 master-0 kubenswrapper[31559]: I0216 02:25:08.901833 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:25:08.902194 master-0 kubenswrapper[31559]: I0216 02:25:08.902154 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:08.902273 master-0 kubenswrapper[31559]: I0216 02:25:08.902200 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:08.903225 master-0 kubenswrapper[31559]: I0216 02:25:08.903150 31559 status_manager.go:851] "Failed to get status for pod" podUID="532487ad51c30257b744e7c1c79fb34f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:25:08.903428 master-0 kubenswrapper[31559]: E0216 02:25:08.903162 31559 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:08.904031 master-0 kubenswrapper[31559]: I0216 02:25:08.903967 31559 status_manager.go:851] "Failed to get status for pod" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:25:09.920463 master-0 kubenswrapper[31559]: I0216 02:25:09.919726 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log" Feb 16 02:25:09.921059 master-0 kubenswrapper[31559]: I0216 02:25:09.920473 31559 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/2.log" Feb 16 02:25:09.922404 master-0 kubenswrapper[31559]: I0216 02:25:09.921517 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log" Feb 16 02:25:09.922404 master-0 kubenswrapper[31559]: I0216 02:25:09.921610 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691"} Feb 16 02:25:09.938464 master-0 kubenswrapper[31559]: I0216 02:25:09.936468 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c"} Feb 16 02:25:09.938464 master-0 kubenswrapper[31559]: I0216 02:25:09.936536 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac"} Feb 16 02:25:10.385331 master-0 kubenswrapper[31559]: I0216 02:25:10.385268 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:25:10.385789 master-0 kubenswrapper[31559]: I0216 02:25:10.385670 31559 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": 
dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 16 02:25:10.385789 master-0 kubenswrapper[31559]: I0216 02:25:10.385730 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 16 02:25:10.953466 master-0 kubenswrapper[31559]: I0216 02:25:10.953334 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611"} Feb 16 02:25:10.953466 master-0 kubenswrapper[31559]: I0216 02:25:10.953410 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce"} Feb 16 02:25:10.953946 master-0 kubenswrapper[31559]: I0216 02:25:10.953468 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:10.953946 master-0 kubenswrapper[31559]: I0216 02:25:10.953494 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef"} Feb 16 02:25:10.953946 master-0 kubenswrapper[31559]: I0216 02:25:10.953615 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:10.953946 master-0 kubenswrapper[31559]: I0216 02:25:10.953643 
31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:12.862933 master-0 kubenswrapper[31559]: I0216 02:25:12.862824 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:25:12.967068 master-0 kubenswrapper[31559]: I0216 02:25:12.966984 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:12.967068 master-0 kubenswrapper[31559]: I0216 02:25:12.967059 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:12.974154 master-0 kubenswrapper[31559]: I0216 02:25:12.974080 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:15.975858 master-0 kubenswrapper[31559]: I0216 02:25:15.975805 31559 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:16.001040 master-0 kubenswrapper[31559]: I0216 02:25:16.000840 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:16.001040 master-0 kubenswrapper[31559]: I0216 02:25:16.000887 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:16.007317 master-0 kubenswrapper[31559]: I0216 02:25:16.007273 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:17.005922 master-0 kubenswrapper[31559]: I0216 02:25:17.005822 31559 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:17.005922 master-0 kubenswrapper[31559]: I0216 02:25:17.005859 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="859f0343-1915-4ebf-a758-4b73ffbfc401" Feb 16 02:25:17.943547 master-0 kubenswrapper[31559]: I0216 02:25:17.943479 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="faa15f80078a2bfbe2234a74ab4da87c" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:25:20.389525 master-0 kubenswrapper[31559]: I0216 02:25:20.389416 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:25:20.397276 master-0 kubenswrapper[31559]: I0216 02:25:20.397196 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:25:25.543920 master-0 kubenswrapper[31559]: I0216 02:25:25.543796 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:25:25.967019 master-0 kubenswrapper[31559]: I0216 02:25:25.966901 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 02:25:26.194861 master-0 kubenswrapper[31559]: I0216 02:25:26.194764 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 16 02:25:26.430367 master-0 kubenswrapper[31559]: I0216 02:25:26.430272 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 02:25:27.198391 master-0 kubenswrapper[31559]: I0216 02:25:27.198318 31559 reflector.go:368] 
Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 02:25:27.257626 master-0 kubenswrapper[31559]: I0216 02:25:27.257574 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 02:25:27.734128 master-0 kubenswrapper[31559]: I0216 02:25:27.734034 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 02:25:27.739554 master-0 kubenswrapper[31559]: I0216 02:25:27.739504 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-bxb2h" Feb 16 02:25:27.902803 master-0 kubenswrapper[31559]: I0216 02:25:27.896470 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 02:25:27.924338 master-0 kubenswrapper[31559]: I0216 02:25:27.924292 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-tw6fq" Feb 16 02:25:27.952215 master-0 kubenswrapper[31559]: I0216 02:25:27.952130 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 02:25:28.068749 master-0 kubenswrapper[31559]: I0216 02:25:28.068573 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 16 02:25:28.092864 master-0 kubenswrapper[31559]: I0216 02:25:28.092780 31559 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 02:25:28.352465 master-0 kubenswrapper[31559]: I0216 02:25:28.352229 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zs9t" Feb 16 02:25:28.442598 master-0 kubenswrapper[31559]: I0216 02:25:28.442480 31559 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 02:25:28.453467 master-0 kubenswrapper[31559]: I0216 02:25:28.453383 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 02:25:28.573059 master-0 kubenswrapper[31559]: I0216 02:25:28.572939 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 02:25:28.614186 master-0 kubenswrapper[31559]: I0216 02:25:28.614024 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 02:25:28.979696 master-0 kubenswrapper[31559]: I0216 02:25:28.979596 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 02:25:29.126272 master-0 kubenswrapper[31559]: I0216 02:25:29.126195 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 02:25:29.193989 master-0 kubenswrapper[31559]: I0216 02:25:29.193859 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 02:25:29.208670 master-0 kubenswrapper[31559]: I0216 02:25:29.208582 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 02:25:29.321040 master-0 kubenswrapper[31559]: I0216 02:25:29.320874 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 02:25:29.373160 master-0 kubenswrapper[31559]: I0216 02:25:29.373059 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 02:25:29.382023 master-0 kubenswrapper[31559]: I0216 02:25:29.381981 31559 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 02:25:29.443600 master-0 kubenswrapper[31559]: I0216 02:25:29.443556 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 02:25:29.477617 master-0 kubenswrapper[31559]: I0216 02:25:29.477538 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 02:25:29.532650 master-0 kubenswrapper[31559]: I0216 02:25:29.532397 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 02:25:29.580466 master-0 kubenswrapper[31559]: I0216 02:25:29.580219 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 02:25:29.646216 master-0 kubenswrapper[31559]: I0216 02:25:29.646121 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 02:25:29.676764 master-0 kubenswrapper[31559]: I0216 02:25:29.676697 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 02:25:29.698385 master-0 kubenswrapper[31559]: I0216 02:25:29.698241 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-whkzn" Feb 16 02:25:29.723522 master-0 kubenswrapper[31559]: I0216 02:25:29.723426 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 16 02:25:29.733332 master-0 kubenswrapper[31559]: I0216 02:25:29.733261 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 02:25:29.776796 master-0 kubenswrapper[31559]: I0216 02:25:29.776731 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 02:25:29.860011 master-0 kubenswrapper[31559]: I0216 02:25:29.859909 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-wsv7k" Feb 16 02:25:29.955337 master-0 kubenswrapper[31559]: I0216 02:25:29.955257 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 02:25:29.979332 master-0 kubenswrapper[31559]: I0216 02:25:29.977517 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 02:25:30.000888 master-0 kubenswrapper[31559]: I0216 02:25:30.000799 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 16 02:25:30.130818 master-0 kubenswrapper[31559]: I0216 02:25:30.130647 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 02:25:30.272527 master-0 kubenswrapper[31559]: I0216 02:25:30.272392 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 02:25:30.455305 master-0 kubenswrapper[31559]: I0216 02:25:30.455207 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 02:25:30.471421 master-0 kubenswrapper[31559]: I0216 02:25:30.471343 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 02:25:30.712702 master-0 kubenswrapper[31559]: I0216 02:25:30.712545 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 02:25:30.754876 master-0 kubenswrapper[31559]: I0216 02:25:30.754785 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 02:25:30.762402 master-0 kubenswrapper[31559]: I0216 02:25:30.762334 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kvrqk" Feb 16 02:25:30.784224 master-0 kubenswrapper[31559]: I0216 02:25:30.784154 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 02:25:30.790776 master-0 kubenswrapper[31559]: I0216 02:25:30.790715 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 02:25:30.936920 master-0 kubenswrapper[31559]: I0216 02:25:30.936785 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 16 02:25:30.964067 master-0 kubenswrapper[31559]: I0216 02:25:30.963897 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 02:25:30.995398 master-0 kubenswrapper[31559]: I0216 02:25:30.995311 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 02:25:31.009902 master-0 kubenswrapper[31559]: I0216 02:25:31.009835 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 02:25:31.033873 master-0 kubenswrapper[31559]: I0216 02:25:31.033814 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 02:25:31.045034 master-0 kubenswrapper[31559]: I0216 02:25:31.044953 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 02:25:31.082388 master-0 kubenswrapper[31559]: I0216 02:25:31.082279 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 02:25:31.116461 master-0 kubenswrapper[31559]: I0216 02:25:31.116343 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 02:25:31.117817 master-0 kubenswrapper[31559]: I0216 02:25:31.117759 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 02:25:31.318521 master-0 kubenswrapper[31559]: I0216 02:25:31.318328 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 02:25:31.359206 master-0 kubenswrapper[31559]: I0216 02:25:31.359094 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 16 02:25:31.454272 master-0 kubenswrapper[31559]: I0216 02:25:31.454198 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 02:25:31.468226 master-0 kubenswrapper[31559]: I0216 02:25:31.468097 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 02:25:31.468226 master-0 kubenswrapper[31559]: I0216 02:25:31.468143 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 02:25:31.554148 master-0 kubenswrapper[31559]: I0216 02:25:31.554040 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 02:25:31.555900 master-0 kubenswrapper[31559]: I0216 02:25:31.555845 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 02:25:31.583701 master-0 kubenswrapper[31559]: I0216 02:25:31.583536 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 02:25:31.602522 master-0 kubenswrapper[31559]: I0216 02:25:31.602424 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 02:25:31.637086 master-0 kubenswrapper[31559]: I0216 02:25:31.637005 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 16 02:25:31.656750 master-0 kubenswrapper[31559]: I0216 02:25:31.656679 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-bh67v" Feb 16 02:25:31.710269 master-0 kubenswrapper[31559]: I0216 02:25:31.710168 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 02:25:31.721167 master-0 kubenswrapper[31559]: I0216 02:25:31.721094 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 02:25:31.742184 master-0 kubenswrapper[31559]: I0216 02:25:31.742144 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 02:25:31.802512 master-0 kubenswrapper[31559]: I0216 02:25:31.802418 31559 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 02:25:31.811431 master-0 kubenswrapper[31559]: I0216 02:25:31.811365 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 02:25:31.811571 master-0 kubenswrapper[31559]: I0216 02:25:31.811471 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 02:25:31.818961 master-0 kubenswrapper[31559]: I0216 02:25:31.818898 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:25:31.851191 master-0 kubenswrapper[31559]: I0216 02:25:31.851006 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=16.850974918 podStartE2EDuration="16.850974918s" podCreationTimestamp="2026-02-16 02:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:25:31.843230762 +0000 UTC m=+184.187836807" watchObservedRunningTime="2026-02-16 02:25:31.850974918 +0000 UTC m=+184.195580963" Feb 16 02:25:31.887814 master-0 kubenswrapper[31559]: I0216 02:25:31.887746 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-w8x86" Feb 16 02:25:32.032868 master-0 kubenswrapper[31559]: I0216 02:25:32.032529 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 02:25:32.044295 master-0 kubenswrapper[31559]: I0216 02:25:32.044244 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-r9rvr" Feb 16 02:25:32.070620 master-0 kubenswrapper[31559]: I0216 02:25:32.070575 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 02:25:32.078309 master-0 kubenswrapper[31559]: I0216 02:25:32.078266 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 02:25:32.100414 master-0 kubenswrapper[31559]: I0216 02:25:32.100342 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-j5nng" Feb 16 02:25:32.182953 master-0 kubenswrapper[31559]: I0216 02:25:32.182905 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 02:25:32.186669 master-0 kubenswrapper[31559]: I0216 02:25:32.186580 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:25:32.203316 master-0 kubenswrapper[31559]: I0216 02:25:32.203239 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 02:25:32.257955 master-0 kubenswrapper[31559]: I0216 02:25:32.257893 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 02:25:32.304237 master-0 kubenswrapper[31559]: I0216 02:25:32.304162 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 16 02:25:32.359239 master-0 kubenswrapper[31559]: I0216 02:25:32.359142 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 16 02:25:32.398983 master-0 kubenswrapper[31559]: I0216 02:25:32.398925 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 02:25:32.455222 master-0 kubenswrapper[31559]: I0216 02:25:32.455070 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 02:25:32.455222 master-0 kubenswrapper[31559]: I0216 02:25:32.455072 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 02:25:32.517634 master-0 kubenswrapper[31559]: I0216 02:25:32.517554 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 02:25:32.560518 master-0 kubenswrapper[31559]: I0216 02:25:32.560474 31559 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 16 02:25:32.603648 master-0 kubenswrapper[31559]: I0216 02:25:32.603581 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 02:25:32.664380 master-0 kubenswrapper[31559]: I0216 02:25:32.664291 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-8pgh8" Feb 16 02:25:32.668733 master-0 kubenswrapper[31559]: I0216 02:25:32.668671 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 02:25:32.789800 master-0 kubenswrapper[31559]: I0216 02:25:32.789624 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 02:25:32.865797 master-0 kubenswrapper[31559]: I0216 02:25:32.865743 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 16 02:25:32.907087 master-0 kubenswrapper[31559]: I0216 02:25:32.906979 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 02:25:32.921991 master-0 kubenswrapper[31559]: I0216 02:25:32.921919 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 02:25:32.975696 master-0 kubenswrapper[31559]: I0216 02:25:32.975564 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 02:25:33.000384 master-0 kubenswrapper[31559]: I0216 02:25:33.000296 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 02:25:33.065281 master-0 
kubenswrapper[31559]: I0216 02:25:33.065106 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 02:25:33.067039 master-0 kubenswrapper[31559]: I0216 02:25:33.066985 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 02:25:33.070338 master-0 kubenswrapper[31559]: I0216 02:25:33.070281 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 02:25:33.087648 master-0 kubenswrapper[31559]: I0216 02:25:33.087570 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 02:25:33.174799 master-0 kubenswrapper[31559]: I0216 02:25:33.174701 31559 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 02:25:33.215003 master-0 kubenswrapper[31559]: I0216 02:25:33.214934 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 02:25:33.289367 master-0 kubenswrapper[31559]: I0216 02:25:33.289281 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 16 02:25:33.350280 master-0 kubenswrapper[31559]: I0216 02:25:33.350128 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 02:25:33.468466 master-0 kubenswrapper[31559]: I0216 02:25:33.468350 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mhjgf" Feb 16 02:25:33.519296 master-0 kubenswrapper[31559]: I0216 02:25:33.519102 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 02:25:33.557690 master-0 kubenswrapper[31559]: I0216 02:25:33.557604 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-ccbvw" Feb 16 02:25:33.583015 master-0 kubenswrapper[31559]: I0216 02:25:33.582907 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-xrwft" Feb 16 02:25:33.750281 master-0 kubenswrapper[31559]: I0216 02:25:33.750190 31559 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 02:25:33.796195 master-0 kubenswrapper[31559]: I0216 02:25:33.796115 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-wp42g" Feb 16 02:25:33.802392 master-0 kubenswrapper[31559]: I0216 02:25:33.802316 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 02:25:33.875054 master-0 kubenswrapper[31559]: I0216 02:25:33.874992 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 02:25:33.930829 master-0 kubenswrapper[31559]: I0216 02:25:33.929332 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 02:25:33.950874 master-0 kubenswrapper[31559]: I0216 02:25:33.950790 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 02:25:33.977762 master-0 kubenswrapper[31559]: I0216 02:25:33.977698 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 02:25:34.025684 master-0 kubenswrapper[31559]: I0216 02:25:34.024992 31559 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 02:25:34.125082 master-0 kubenswrapper[31559]: I0216 02:25:34.125013 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 02:25:34.128352 master-0 kubenswrapper[31559]: I0216 02:25:34.128300 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 02:25:34.164664 master-0 kubenswrapper[31559]: I0216 02:25:34.164604 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-785jj" Feb 16 02:25:34.198846 master-0 kubenswrapper[31559]: I0216 02:25:34.198774 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 02:25:34.241423 master-0 kubenswrapper[31559]: I0216 02:25:34.241369 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-xg7sl" Feb 16 02:25:34.355831 master-0 kubenswrapper[31559]: I0216 02:25:34.355670 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 02:25:34.429282 master-0 kubenswrapper[31559]: I0216 02:25:34.429210 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 02:25:34.437932 master-0 kubenswrapper[31559]: I0216 02:25:34.437879 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 02:25:34.522253 master-0 kubenswrapper[31559]: I0216 02:25:34.522197 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2md94v7udfjth" Feb 16 02:25:34.645419 master-0 
kubenswrapper[31559]: I0216 02:25:34.645258 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 02:25:34.647241 master-0 kubenswrapper[31559]: I0216 02:25:34.647186 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 02:25:34.710241 master-0 kubenswrapper[31559]: I0216 02:25:34.710168 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 02:25:34.741921 master-0 kubenswrapper[31559]: I0216 02:25:34.741830 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 02:25:34.905060 master-0 kubenswrapper[31559]: I0216 02:25:34.904873 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 02:25:34.942361 master-0 kubenswrapper[31559]: I0216 02:25:34.942292 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9dqnm" Feb 16 02:25:34.945636 master-0 kubenswrapper[31559]: I0216 02:25:34.945592 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 02:25:34.948857 master-0 kubenswrapper[31559]: I0216 02:25:34.948807 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 02:25:35.044570 master-0 kubenswrapper[31559]: I0216 02:25:35.044494 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 02:25:35.290596 master-0 kubenswrapper[31559]: I0216 02:25:35.290514 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 02:25:35.328581 master-0 
kubenswrapper[31559]: I0216 02:25:35.328491 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 02:25:35.330339 master-0 kubenswrapper[31559]: I0216 02:25:35.330295 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 02:25:35.368827 master-0 kubenswrapper[31559]: I0216 02:25:35.368757 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 02:25:35.388078 master-0 kubenswrapper[31559]: I0216 02:25:35.387899 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmfsl" Feb 16 02:25:35.545919 master-0 kubenswrapper[31559]: I0216 02:25:35.545781 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 02:25:35.558034 master-0 kubenswrapper[31559]: I0216 02:25:35.557973 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 02:25:35.633277 master-0 kubenswrapper[31559]: I0216 02:25:35.633193 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 02:25:35.643209 master-0 kubenswrapper[31559]: I0216 02:25:35.643166 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 02:25:35.661204 master-0 kubenswrapper[31559]: I0216 02:25:35.661168 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 02:25:35.663243 master-0 kubenswrapper[31559]: I0216 02:25:35.663219 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 
02:25:35.741220 master-0 kubenswrapper[31559]: I0216 02:25:35.741158 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 02:25:35.819413 master-0 kubenswrapper[31559]: I0216 02:25:35.819199 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 02:25:35.837936 master-0 kubenswrapper[31559]: I0216 02:25:35.837878 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 02:25:35.902362 master-0 kubenswrapper[31559]: I0216 02:25:35.902262 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 02:25:35.903946 master-0 kubenswrapper[31559]: I0216 02:25:35.903897 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 02:25:35.928095 master-0 kubenswrapper[31559]: I0216 02:25:35.928023 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 02:25:35.947633 master-0 kubenswrapper[31559]: I0216 02:25:35.947537 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 02:25:36.036220 master-0 kubenswrapper[31559]: I0216 02:25:36.036115 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 16 02:25:36.069398 master-0 kubenswrapper[31559]: I0216 02:25:36.069313 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 02:25:36.088463 master-0 kubenswrapper[31559]: I0216 02:25:36.088332 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 02:25:36.168577 master-0 kubenswrapper[31559]: I0216 02:25:36.168505 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 16 02:25:36.253402 master-0 kubenswrapper[31559]: I0216 02:25:36.253269 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 02:25:36.268113 master-0 kubenswrapper[31559]: I0216 02:25:36.268029 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 02:25:36.288698 master-0 kubenswrapper[31559]: I0216 02:25:36.288559 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 16 02:25:36.404279 master-0 kubenswrapper[31559]: I0216 02:25:36.404136 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-s4hmw" Feb 16 02:25:36.436464 master-0 kubenswrapper[31559]: I0216 02:25:36.436390 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 02:25:36.444348 master-0 kubenswrapper[31559]: I0216 02:25:36.444290 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 02:25:36.741744 master-0 kubenswrapper[31559]: I0216 02:25:36.741660 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 02:25:36.749128 master-0 kubenswrapper[31559]: I0216 02:25:36.749056 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 02:25:36.845311 master-0 
kubenswrapper[31559]: I0216 02:25:36.845250 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 02:25:36.896139 master-0 kubenswrapper[31559]: I0216 02:25:36.896035 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 02:25:36.966381 master-0 kubenswrapper[31559]: I0216 02:25:36.966319 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 02:25:36.976927 master-0 kubenswrapper[31559]: I0216 02:25:36.976866 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-tmpwz" Feb 16 02:25:36.999993 master-0 kubenswrapper[31559]: I0216 02:25:36.999871 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 02:25:37.022318 master-0 kubenswrapper[31559]: I0216 02:25:37.022252 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 02:25:37.026392 master-0 kubenswrapper[31559]: I0216 02:25:37.026350 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 02:25:37.118862 master-0 kubenswrapper[31559]: I0216 02:25:37.118766 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 02:25:37.181880 master-0 kubenswrapper[31559]: I0216 02:25:37.181785 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 02:25:37.322798 master-0 kubenswrapper[31559]: I0216 02:25:37.322598 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 
02:25:37.368221 master-0 kubenswrapper[31559]: I0216 02:25:37.368158 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 02:25:37.395780 master-0 kubenswrapper[31559]: I0216 02:25:37.395704 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 02:25:37.430068 master-0 kubenswrapper[31559]: I0216 02:25:37.430011 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 02:25:37.494276 master-0 kubenswrapper[31559]: I0216 02:25:37.493996 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 02:25:37.557880 master-0 kubenswrapper[31559]: I0216 02:25:37.557810 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 02:25:37.601598 master-0 kubenswrapper[31559]: I0216 02:25:37.599043 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 02:25:37.601598 master-0 kubenswrapper[31559]: I0216 02:25:37.600782 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 02:25:37.602769 master-0 kubenswrapper[31559]: I0216 02:25:37.602626 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 16 02:25:37.605518 master-0 kubenswrapper[31559]: I0216 02:25:37.604344 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 02:25:37.768979 master-0 kubenswrapper[31559]: I0216 02:25:37.768868 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 02:25:37.813427 master-0 kubenswrapper[31559]: I0216 02:25:37.813344 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 02:25:37.831341 master-0 kubenswrapper[31559]: I0216 02:25:37.831222 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 02:25:37.964985 master-0 kubenswrapper[31559]: I0216 02:25:37.964898 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 02:25:37.976887 master-0 kubenswrapper[31559]: I0216 02:25:37.976808 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 02:25:37.995027 master-0 kubenswrapper[31559]: I0216 02:25:37.994942 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 02:25:38.125384 master-0 kubenswrapper[31559]: I0216 02:25:38.125311 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 02:25:38.128266 master-0 kubenswrapper[31559]: I0216 02:25:38.128186 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 02:25:38.139811 master-0 kubenswrapper[31559]: I0216 02:25:38.139732 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 02:25:38.190863 master-0 kubenswrapper[31559]: I0216 02:25:38.190785 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 02:25:38.342006 master-0 kubenswrapper[31559]: I0216 02:25:38.341848 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 02:25:38.383378 master-0 kubenswrapper[31559]: I0216 02:25:38.383306 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 02:25:38.451720 master-0 kubenswrapper[31559]: I0216 02:25:38.451664 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 02:25:38.453316 master-0 kubenswrapper[31559]: I0216 02:25:38.453275 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 02:25:38.469859 master-0 kubenswrapper[31559]: I0216 02:25:38.469792 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 02:25:38.540956 master-0 kubenswrapper[31559]: I0216 02:25:38.540873 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 02:25:38.584089 master-0 kubenswrapper[31559]: I0216 02:25:38.583999 31559 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 02:25:38.584467 master-0 kubenswrapper[31559]: I0216 02:25:38.584359 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="0ef22dacc42282620a76fbbcd3b157ad" containerName="startup-monitor" containerID="cri-o://eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997" gracePeriod=5 Feb 16 02:25:38.772005 master-0 kubenswrapper[31559]: I0216 02:25:38.771847 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 02:25:38.773111 master-0 kubenswrapper[31559]: I0216 02:25:38.773036 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Feb 16 02:25:38.835571 master-0 kubenswrapper[31559]: I0216 02:25:38.835494 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 16 02:25:38.839069 master-0 kubenswrapper[31559]: I0216 02:25:38.838999 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 02:25:38.918318 master-0 kubenswrapper[31559]: I0216 02:25:38.918230 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 02:25:38.943462 master-0 kubenswrapper[31559]: I0216 02:25:38.943337 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 02:25:38.994117 master-0 kubenswrapper[31559]: I0216 02:25:38.994041 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 02:25:39.005265 master-0 kubenswrapper[31559]: I0216 02:25:39.005197 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 02:25:39.035919 master-0 kubenswrapper[31559]: I0216 02:25:39.035752 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:25:39.049349 master-0 kubenswrapper[31559]: I0216 02:25:39.049276 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 02:25:39.145007 master-0 kubenswrapper[31559]: I0216 02:25:39.144935 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 02:25:39.292223 master-0 kubenswrapper[31559]: I0216 02:25:39.292007 31559 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 16 02:25:39.327364 master-0 kubenswrapper[31559]: I0216 02:25:39.327266 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 02:25:39.356809 master-0 kubenswrapper[31559]: I0216 02:25:39.356718 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 02:25:39.365186 master-0 kubenswrapper[31559]: I0216 02:25:39.365101 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 02:25:39.487025 master-0 kubenswrapper[31559]: I0216 02:25:39.486965 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 02:25:39.495067 master-0 kubenswrapper[31559]: I0216 02:25:39.494999 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 02:25:39.585297 master-0 kubenswrapper[31559]: I0216 02:25:39.585113 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nfl29" Feb 16 02:25:39.632012 master-0 kubenswrapper[31559]: I0216 02:25:39.631686 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 16 02:25:39.651977 master-0 kubenswrapper[31559]: I0216 02:25:39.651879 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 16 02:25:39.703931 master-0 kubenswrapper[31559]: I0216 02:25:39.703822 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 02:25:39.733391 master-0 kubenswrapper[31559]: I0216 02:25:39.733309 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 02:25:39.818196 master-0 kubenswrapper[31559]: I0216 02:25:39.818097 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 02:25:39.836122 master-0 kubenswrapper[31559]: I0216 02:25:39.835965 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 02:25:39.960198 master-0 kubenswrapper[31559]: I0216 02:25:39.960118 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 02:25:39.964565 master-0 kubenswrapper[31559]: I0216 02:25:39.964494 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 02:25:39.967926 master-0 kubenswrapper[31559]: I0216 02:25:39.967881 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 16 02:25:40.079694 master-0 kubenswrapper[31559]: I0216 02:25:40.079639 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 02:25:40.121962 master-0 kubenswrapper[31559]: I0216 02:25:40.121852 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 02:25:40.137720 master-0 kubenswrapper[31559]: I0216 02:25:40.137656 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 02:25:40.153495 master-0 kubenswrapper[31559]: I0216 02:25:40.153452 31559 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 16 02:25:40.303813 master-0 kubenswrapper[31559]: I0216 02:25:40.303744 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 02:25:40.489537 master-0 kubenswrapper[31559]: I0216 02:25:40.489472 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 02:25:40.679139 master-0 kubenswrapper[31559]: I0216 02:25:40.679077 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 02:25:40.698258 master-0 kubenswrapper[31559]: I0216 02:25:40.698197 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 02:25:40.726742 master-0 kubenswrapper[31559]: I0216 02:25:40.726628 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 02:25:40.734646 master-0 kubenswrapper[31559]: I0216 02:25:40.734591 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 16 02:25:40.765489 master-0 kubenswrapper[31559]: I0216 02:25:40.765314 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 02:25:40.808123 master-0 kubenswrapper[31559]: I0216 02:25:40.808052 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 16 02:25:40.901506 master-0 kubenswrapper[31559]: I0216 02:25:40.901425 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 02:25:41.108565 master-0 kubenswrapper[31559]: I0216 02:25:41.108371 31559 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 02:25:41.255493 master-0 kubenswrapper[31559]: I0216 02:25:41.255396 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 02:25:41.348794 master-0 kubenswrapper[31559]: I0216 02:25:41.348736 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 02:25:41.396306 master-0 kubenswrapper[31559]: I0216 02:25:41.396217 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 02:25:41.447855 master-0 kubenswrapper[31559]: I0216 02:25:41.447809 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 16 02:25:41.662790 master-0 kubenswrapper[31559]: I0216 02:25:41.662668 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 16 02:25:41.705288 master-0 kubenswrapper[31559]: I0216 02:25:41.705241 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 02:25:41.717702 master-0 kubenswrapper[31559]: I0216 02:25:41.717639 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 02:25:41.755187 master-0 kubenswrapper[31559]: I0216 02:25:41.755135 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 02:25:41.774366 master-0 kubenswrapper[31559]: I0216 02:25:41.774339 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 02:25:41.852413 master-0 kubenswrapper[31559]: I0216 02:25:41.852386 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 02:25:42.075321 master-0 kubenswrapper[31559]: I0216 02:25:42.075254 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 02:25:42.128602 master-0 kubenswrapper[31559]: I0216 02:25:42.128489 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 02:25:42.311781 master-0 kubenswrapper[31559]: I0216 02:25:42.311722 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 02:25:42.358350 master-0 kubenswrapper[31559]: I0216 02:25:42.358213 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 02:25:42.677036 master-0 kubenswrapper[31559]: I0216 02:25:42.676983 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 02:25:42.742599 master-0 kubenswrapper[31559]: I0216 02:25:42.742539 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 02:25:43.005500 master-0 kubenswrapper[31559]: I0216 02:25:43.005279 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 02:25:44.208691 master-0 kubenswrapper[31559]: I0216 02:25:44.208600 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_0ef22dacc42282620a76fbbcd3b157ad/startup-monitor/0.log" Feb 16 02:25:44.209526 master-0 kubenswrapper[31559]: I0216 02:25:44.208712 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:25:44.252493 master-0 kubenswrapper[31559]: I0216 02:25:44.252364 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_0ef22dacc42282620a76fbbcd3b157ad/startup-monitor/0.log" Feb 16 02:25:44.252763 master-0 kubenswrapper[31559]: I0216 02:25:44.252511 31559 generic.go:334] "Generic (PLEG): container finished" podID="0ef22dacc42282620a76fbbcd3b157ad" containerID="eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997" exitCode=137 Feb 16 02:25:44.252763 master-0 kubenswrapper[31559]: I0216 02:25:44.252593 31559 scope.go:117] "RemoveContainer" containerID="eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997" Feb 16 02:25:44.252763 master-0 kubenswrapper[31559]: I0216 02:25:44.252691 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:25:44.291016 master-0 kubenswrapper[31559]: I0216 02:25:44.290399 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " Feb 16 02:25:44.291016 master-0 kubenswrapper[31559]: I0216 02:25:44.290577 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " Feb 16 02:25:44.291016 master-0 kubenswrapper[31559]: I0216 02:25:44.290592 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests" (OuterVolumeSpecName: 
"manifests") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:25:44.291016 master-0 kubenswrapper[31559]: I0216 02:25:44.290784 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " Feb 16 02:25:44.291016 master-0 kubenswrapper[31559]: I0216 02:25:44.290885 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " Feb 16 02:25:44.291016 master-0 kubenswrapper[31559]: I0216 02:25:44.290922 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " Feb 16 02:25:44.291016 master-0 kubenswrapper[31559]: I0216 02:25:44.290949 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock" (OuterVolumeSpecName: "var-lock") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:25:44.291016 master-0 kubenswrapper[31559]: I0216 02:25:44.291026 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log" (OuterVolumeSpecName: "var-log") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). 
InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:25:44.292049 master-0 kubenswrapper[31559]: I0216 02:25:44.291103 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:25:44.292049 master-0 kubenswrapper[31559]: I0216 02:25:44.291383 31559 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") on node \"master-0\" DevicePath \"\"" Feb 16 02:25:44.292049 master-0 kubenswrapper[31559]: I0216 02:25:44.291410 31559 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:25:44.292049 master-0 kubenswrapper[31559]: I0216 02:25:44.291432 31559 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") on node \"master-0\" DevicePath \"\"" Feb 16 02:25:44.292049 master-0 kubenswrapper[31559]: I0216 02:25:44.291544 31559 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:25:44.298647 master-0 kubenswrapper[31559]: I0216 02:25:44.298544 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). 
InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:25:44.299899 master-0 kubenswrapper[31559]: I0216 02:25:44.299840 31559 scope.go:117] "RemoveContainer" containerID="eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997" Feb 16 02:25:44.304216 master-0 kubenswrapper[31559]: E0216 02:25:44.304143 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997\": container with ID starting with eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997 not found: ID does not exist" containerID="eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997" Feb 16 02:25:44.304360 master-0 kubenswrapper[31559]: I0216 02:25:44.304253 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997"} err="failed to get container status \"eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997\": rpc error: code = NotFound desc = could not find container \"eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997\": container with ID starting with eefd72e3b857d5b0e4035c6bbedbcb39985d274f0b37e2420fd97b23fdfa4997 not found: ID does not exist" Feb 16 02:25:44.393419 master-0 kubenswrapper[31559]: I0216 02:25:44.393209 31559 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:25:44.537873 master-0 kubenswrapper[31559]: I0216 02:25:44.537787 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 02:25:44.695196 master-0 kubenswrapper[31559]: I0216 02:25:44.695088 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 02:25:45.938242 master-0 kubenswrapper[31559]: I0216 02:25:45.938168 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef22dacc42282620a76fbbcd3b157ad" path="/var/lib/kubelet/pods/0ef22dacc42282620a76fbbcd3b157ad/volumes" Feb 16 02:26:05.400105 master-0 kubenswrapper[31559]: I0216 02:26:05.400026 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"] Feb 16 02:26:05.401216 master-0 kubenswrapper[31559]: E0216 02:26:05.400386 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" containerName="installer" Feb 16 02:26:05.401216 master-0 kubenswrapper[31559]: I0216 02:26:05.400413 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" containerName="installer" Feb 16 02:26:05.401216 master-0 kubenswrapper[31559]: E0216 02:26:05.400509 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef22dacc42282620a76fbbcd3b157ad" containerName="startup-monitor" Feb 16 02:26:05.401216 master-0 kubenswrapper[31559]: I0216 02:26:05.400532 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef22dacc42282620a76fbbcd3b157ad" containerName="startup-monitor" Feb 16 02:26:05.401216 master-0 kubenswrapper[31559]: I0216 02:26:05.400867 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="470e2f5e-6f60-49be-850b-8a0df6566fdd" containerName="installer" Feb 16 02:26:05.401216 master-0 kubenswrapper[31559]: I0216 02:26:05.400915 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef22dacc42282620a76fbbcd3b157ad" containerName="startup-monitor" Feb 16 02:26:05.403677 master-0 kubenswrapper[31559]: I0216 02:26:05.403622 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" Feb 16 02:26:05.406793 master-0 kubenswrapper[31559]: I0216 02:26:05.406153 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 02:26:05.406793 master-0 kubenswrapper[31559]: I0216 02:26:05.406281 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 02:26:05.407520 master-0 kubenswrapper[31559]: I0216 02:26:05.407473 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-70ac8e3ti0c8f" Feb 16 02:26:05.407685 master-0 kubenswrapper[31559]: I0216 02:26:05.407478 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 02:26:05.409906 master-0 kubenswrapper[31559]: I0216 02:26:05.409861 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 02:26:05.411272 master-0 kubenswrapper[31559]: I0216 02:26:05.411222 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 16 02:26:05.435942 master-0 kubenswrapper[31559]: I0216 02:26:05.435893 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"] Feb 16 02:26:05.524354 master-0 kubenswrapper[31559]: I0216 02:26:05.524154 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-grpc-tls\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" Feb 16 02:26:05.524354 master-0 kubenswrapper[31559]: I0216 
02:26:05.524328 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" Feb 16 02:26:05.524830 master-0 kubenswrapper[31559]: I0216 02:26:05.524379 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" Feb 16 02:26:05.524830 master-0 kubenswrapper[31559]: I0216 02:26:05.524432 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" Feb 16 02:26:05.524830 master-0 kubenswrapper[31559]: I0216 02:26:05.524687 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-tls\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" Feb 16 02:26:05.524830 master-0 kubenswrapper[31559]: I0216 02:26:05.524800 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbzxq\" (UniqueName: \"kubernetes.io/projected/61a29892-8d42-4f6d-baa6-9ae46f580453-kube-api-access-mbzxq\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.525221 master-0 kubenswrapper[31559]: I0216 02:26:05.524899 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.525221 master-0 kubenswrapper[31559]: I0216 02:26:05.525019 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/61a29892-8d42-4f6d-baa6-9ae46f580453-metrics-client-ca\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.626673 master-0 kubenswrapper[31559]: I0216 02:26:05.626579 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.626888 master-0 kubenswrapper[31559]: I0216 02:26:05.626765 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.627798 master-0 kubenswrapper[31559]: I0216 02:26:05.627728 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.627948 master-0 kubenswrapper[31559]: I0216 02:26:05.627833 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-tls\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.628048 master-0 kubenswrapper[31559]: I0216 02:26:05.628010 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbzxq\" (UniqueName: \"kubernetes.io/projected/61a29892-8d42-4f6d-baa6-9ae46f580453-kube-api-access-mbzxq\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.628135 master-0 kubenswrapper[31559]: I0216 02:26:05.628102 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.628188 master-0 kubenswrapper[31559]: I0216 02:26:05.628161 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/61a29892-8d42-4f6d-baa6-9ae46f580453-metrics-client-ca\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.628642 master-0 kubenswrapper[31559]: I0216 02:26:05.628543 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-grpc-tls\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.630055 master-0 kubenswrapper[31559]: I0216 02:26:05.630011 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/61a29892-8d42-4f6d-baa6-9ae46f580453-metrics-client-ca\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.631770 master-0 kubenswrapper[31559]: I0216 02:26:05.631607 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.632269 master-0 kubenswrapper[31559]: I0216 02:26:05.632217 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-grpc-tls\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.632324 master-0 kubenswrapper[31559]: I0216 02:26:05.632266 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.633560 master-0 kubenswrapper[31559]: I0216 02:26:05.633486 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.634153 master-0 kubenswrapper[31559]: I0216 02:26:05.634115 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-tls\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.636128 master-0 kubenswrapper[31559]: I0216 02:26:05.636077 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/61a29892-8d42-4f6d-baa6-9ae46f580453-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: 
\"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.656957 master-0 kubenswrapper[31559]: I0216 02:26:05.656799 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbzxq\" (UniqueName: \"kubernetes.io/projected/61a29892-8d42-4f6d-baa6-9ae46f580453-kube-api-access-mbzxq\") pod \"thanos-querier-6fdf8859cd-rtsp7\" (UID: \"61a29892-8d42-4f6d-baa6-9ae46f580453\") " pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:05.742220 master-0 kubenswrapper[31559]: I0216 02:26:05.742112 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:06.289072 master-0 kubenswrapper[31559]: I0216 02:26:06.288909 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"]
Feb 16 02:26:06.460516 master-0 kubenswrapper[31559]: I0216 02:26:06.459769 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" event={"ID":"61a29892-8d42-4f6d-baa6-9ae46f580453","Type":"ContainerStarted","Data":"8b3086168d071c34bee3fa6cea9fd0fd7bfe750dbc0346ea530fa96be91baada"}
Feb 16 02:26:07.998854 master-0 kubenswrapper[31559]: I0216 02:26:07.998766 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-jw7d4"]
Feb 16 02:26:08.000352 master-0 kubenswrapper[31559]: I0216 02:26:08.000324 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.003615 master-0 kubenswrapper[31559]: I0216 02:26:08.003580 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 02:26:08.003820 master-0 kubenswrapper[31559]: I0216 02:26:08.003803 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 02:26:08.003958 master-0 kubenswrapper[31559]: I0216 02:26:08.003943 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 02:26:08.004077 master-0 kubenswrapper[31559]: I0216 02:26:08.004062 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 02:26:08.004215 master-0 kubenswrapper[31559]: I0216 02:26:08.004194 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 02:26:08.010846 master-0 kubenswrapper[31559]: I0216 02:26:08.010777 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-jw7d4"]
Feb 16 02:26:08.056909 master-0 kubenswrapper[31559]: I0216 02:26:08.055899 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-77778cfd78-cjbrf"]
Feb 16 02:26:08.057807 master-0 kubenswrapper[31559]: I0216 02:26:08.057784 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.059892 master-0 kubenswrapper[31559]: I0216 02:26:08.059833 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7m5el3rgfcivj"
Feb 16 02:26:08.068180 master-0 kubenswrapper[31559]: I0216 02:26:08.068083 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-67b79bd656-cs2n2"]
Feb 16 02:26:08.068391 master-0 kubenswrapper[31559]: I0216 02:26:08.068359 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" containerName="metrics-server" containerID="cri-o://f80750b41fcca97bf8458c1b6044d45377e09a5a0f5619c086ed38bf7a1478e0" gracePeriod=170
Feb 16 02:26:08.074251 master-0 kubenswrapper[31559]: I0216 02:26:08.074196 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-client-ca-bundle\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.074335 master-0 kubenswrapper[31559]: I0216 02:26:08.074266 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-config\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.074335 master-0 kubenswrapper[31559]: I0216 02:26:08.074305 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-277zh\" (UniqueName: \"kubernetes.io/projected/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-kube-api-access-277zh\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.074407 master-0 kubenswrapper[31559]: I0216 02:26:08.074340 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-audit-log\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.074586 master-0 kubenswrapper[31559]: I0216 02:26:08.074547 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.074647 master-0 kubenswrapper[31559]: I0216 02:26:08.074601 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-metrics-server-audit-profiles\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.074684 master-0 kubenswrapper[31559]: I0216 02:26:08.074659 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-secret-metrics-client-certs\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.074784 master-0 kubenswrapper[31559]: I0216 02:26:08.074748 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-secret-metrics-server-tls\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.074888 master-0 kubenswrapper[31559]: I0216 02:26:08.074862 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.074983 master-0 kubenswrapper[31559]: I0216 02:26:08.074964 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf984\" (UniqueName: \"kubernetes.io/projected/8e140264-da5d-471d-8fee-a401deeadc83-kube-api-access-hf984\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.075032 master-0 kubenswrapper[31559]: I0216 02:26:08.074995 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e140264-da5d-471d-8fee-a401deeadc83-serving-cert\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.077864 master-0 kubenswrapper[31559]: I0216 02:26:08.077681 31559 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-77778cfd78-cjbrf"]
Feb 16 02:26:08.178092 master-0 kubenswrapper[31559]: I0216 02:26:08.178024 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.178092 master-0 kubenswrapper[31559]: I0216 02:26:08.178101 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf984\" (UniqueName: \"kubernetes.io/projected/8e140264-da5d-471d-8fee-a401deeadc83-kube-api-access-hf984\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.178367 master-0 kubenswrapper[31559]: I0216 02:26:08.178124 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e140264-da5d-471d-8fee-a401deeadc83-serving-cert\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.178367 master-0 kubenswrapper[31559]: I0216 02:26:08.178248 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-client-ca-bundle\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.178367 master-0 kubenswrapper[31559]: I0216 02:26:08.178289 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-config\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.178367 master-0 kubenswrapper[31559]: I0216 02:26:08.178318 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-277zh\" (UniqueName: \"kubernetes.io/projected/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-kube-api-access-277zh\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.178610 master-0 kubenswrapper[31559]: I0216 02:26:08.178369 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-audit-log\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.178610 master-0 kubenswrapper[31559]: I0216 02:26:08.178423 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.178610 master-0 kubenswrapper[31559]: I0216 02:26:08.178462 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-metrics-server-audit-profiles\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.178610 master-0 kubenswrapper[31559]: I0216 02:26:08.178492 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-secret-metrics-client-certs\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.178786 master-0 kubenswrapper[31559]: I0216 02:26:08.178681 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-secret-metrics-server-tls\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.179274 master-0 kubenswrapper[31559]: I0216 02:26:08.179209 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.180937 master-0 kubenswrapper[31559]: E0216 02:26:08.180321 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca podName:8e140264-da5d-471d-8fee-a401deeadc83 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:08.680292346 +0000 UTC m=+221.024898361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca") pod "console-operator-7777d5cc66-jw7d4" (UID: "8e140264-da5d-471d-8fee-a401deeadc83") : configmap references non-existent config key: ca-bundle.crt
Feb 16 02:26:08.181050 master-0 kubenswrapper[31559]: I0216 02:26:08.180978 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-audit-log\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.191469 master-0 kubenswrapper[31559]: I0216 02:26:08.181739 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-config\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.191469 master-0 kubenswrapper[31559]: I0216 02:26:08.182041 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-metrics-server-audit-profiles\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.191469 master-0 kubenswrapper[31559]: I0216 02:26:08.184117 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-secret-metrics-client-certs\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.191469 master-0 kubenswrapper[31559]: I0216 02:26:08.184345 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-secret-metrics-server-tls\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.191469 master-0 kubenswrapper[31559]: I0216 02:26:08.189136 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e140264-da5d-471d-8fee-a401deeadc83-serving-cert\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.191469 master-0 kubenswrapper[31559]: I0216 02:26:08.189146 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-client-ca-bundle\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: \"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.196981 master-0 kubenswrapper[31559]: I0216 02:26:08.196939 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf984\" (UniqueName: \"kubernetes.io/projected/8e140264-da5d-471d-8fee-a401deeadc83-kube-api-access-hf984\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.201535 master-0 kubenswrapper[31559]: I0216 02:26:08.201494 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-277zh\" (UniqueName: \"kubernetes.io/projected/7c128df4-87b3-4b63-a9d3-fe06ae66ab51-kube-api-access-277zh\") pod \"metrics-server-77778cfd78-cjbrf\" (UID: 
\"7c128df4-87b3-4b63-a9d3-fe06ae66ab51\") " pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.378867 master-0 kubenswrapper[31559]: I0216 02:26:08.378757 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf"
Feb 16 02:26:08.689250 master-0 kubenswrapper[31559]: I0216 02:26:08.687811 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:08.689250 master-0 kubenswrapper[31559]: E0216 02:26:08.688180 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca podName:8e140264-da5d-471d-8fee-a401deeadc83 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:09.688161238 +0000 UTC m=+222.032767253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca") pod "console-operator-7777d5cc66-jw7d4" (UID: "8e140264-da5d-471d-8fee-a401deeadc83") : configmap references non-existent config key: ca-bundle.crt
Feb 16 02:26:09.498020 master-0 kubenswrapper[31559]: I0216 02:26:09.496153 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" event={"ID":"61a29892-8d42-4f6d-baa6-9ae46f580453","Type":"ContainerStarted","Data":"d8b339a3c03410c456a2bd4546ba76cdceaf3deeb78664fe667d45c6d455d7ba"}
Feb 16 02:26:09.669239 master-0 kubenswrapper[31559]: I0216 02:26:09.669051 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-77778cfd78-cjbrf"]
Feb 16 02:26:09.705873 master-0 kubenswrapper[31559]: I0216 02:26:09.705765 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:09.706202 master-0 kubenswrapper[31559]: E0216 02:26:09.706136 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca podName:8e140264-da5d-471d-8fee-a401deeadc83 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:11.706114917 +0000 UTC m=+224.050720922 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca") pod "console-operator-7777d5cc66-jw7d4" (UID: "8e140264-da5d-471d-8fee-a401deeadc83") : configmap references non-existent config key: ca-bundle.crt
Feb 16 02:26:10.504586 master-0 kubenswrapper[31559]: I0216 02:26:10.504489 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf" event={"ID":"7c128df4-87b3-4b63-a9d3-fe06ae66ab51","Type":"ContainerStarted","Data":"b3a0fe189a2645d1d1075c4360df8c52f355e78debad711ec4c539e6d850e6c4"}
Feb 16 02:26:10.504586 master-0 kubenswrapper[31559]: I0216 02:26:10.504551 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf" event={"ID":"7c128df4-87b3-4b63-a9d3-fe06ae66ab51","Type":"ContainerStarted","Data":"b70af1f29846e8a2e756876f76252e1626708bc3a68ad1bf163e38c863e29524"}
Feb 16 02:26:10.507938 master-0 kubenswrapper[31559]: I0216 02:26:10.507681 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" event={"ID":"61a29892-8d42-4f6d-baa6-9ae46f580453","Type":"ContainerStarted","Data":"2f8e8b0457bb917b607cd0e4ab66079ee62e6ddb7a6bce91832be45d6132a074"}
Feb 16 02:26:10.507938 master-0 kubenswrapper[31559]: I0216 02:26:10.507932 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" event={"ID":"61a29892-8d42-4f6d-baa6-9ae46f580453","Type":"ContainerStarted","Data":"2344167d52118046ab59fe00566ea70a599a044477b6d4458dec2ff0bf53b43b"}
Feb 16 02:26:10.534102 master-0 kubenswrapper[31559]: I0216 02:26:10.534005 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf" podStartSLOduration=2.5339834359999998 podStartE2EDuration="2.533983436s" podCreationTimestamp="2026-02-16 02:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:26:10.52938205 +0000 UTC m=+222.873988105" watchObservedRunningTime="2026-02-16 02:26:10.533983436 +0000 UTC m=+222.878589471"
Feb 16 02:26:11.521702 master-0 kubenswrapper[31559]: I0216 02:26:11.521497 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" event={"ID":"61a29892-8d42-4f6d-baa6-9ae46f580453","Type":"ContainerStarted","Data":"0e82a8720f3f25fb8b2aa7f975295a31d52d30a7f50b7f53cd05a640efc42889"}
Feb 16 02:26:11.521702 master-0 kubenswrapper[31559]: I0216 02:26:11.521616 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" event={"ID":"61a29892-8d42-4f6d-baa6-9ae46f580453","Type":"ContainerStarted","Data":"422ba955191253d60a707570ca2c5f80b1c87da6d6badf437feb7df0fc307708"}
Feb 16 02:26:11.750517 master-0 kubenswrapper[31559]: I0216 02:26:11.750384 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4"
Feb 16 02:26:11.750942 master-0 kubenswrapper[31559]: E0216 02:26:11.750714 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca podName:8e140264-da5d-471d-8fee-a401deeadc83 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:15.750674725 +0000 UTC m=+228.095280770 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca") pod "console-operator-7777d5cc66-jw7d4" (UID: "8e140264-da5d-471d-8fee-a401deeadc83") : configmap references non-existent config key: ca-bundle.crt
Feb 16 02:26:12.531530 master-0 kubenswrapper[31559]: I0216 02:26:12.531484 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" event={"ID":"61a29892-8d42-4f6d-baa6-9ae46f580453","Type":"ContainerStarted","Data":"addf0082cdddd791208393ec5ad857af6f15782a12364098040a934d8f8ccae1"}
Feb 16 02:26:12.532154 master-0 kubenswrapper[31559]: I0216 02:26:12.532132 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7"
Feb 16 02:26:12.556199 master-0 kubenswrapper[31559]: I0216 02:26:12.556100 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" podStartSLOduration=2.891794576 podStartE2EDuration="7.556075476s" podCreationTimestamp="2026-02-16 02:26:05 +0000 UTC" firstStartedPulling="2026-02-16 02:26:06.303513304 +0000 UTC m=+218.648119319" lastFinishedPulling="2026-02-16 02:26:10.967794194 +0000 UTC m=+223.312400219" observedRunningTime="2026-02-16 02:26:12.553396618 +0000 UTC m=+224.898002673" watchObservedRunningTime="2026-02-16 02:26:12.556075476 +0000 UTC m=+224.900681501"
Feb 16 02:26:12.666725 master-0 kubenswrapper[31559]: I0216 02:26:12.666660 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 02:26:12.669223 master-0 kubenswrapper[31559]: I0216 02:26:12.669184 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:26:12.674072 master-0 kubenswrapper[31559]: I0216 02:26:12.674038 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 16 02:26:12.674460 master-0 kubenswrapper[31559]: I0216 02:26:12.674421 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-3lmddhljekc4u"
Feb 16 02:26:12.674738 master-0 kubenswrapper[31559]: I0216 02:26:12.674717 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 16 02:26:12.674964 master-0 kubenswrapper[31559]: I0216 02:26:12.674940 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 16 02:26:12.675115 master-0 kubenswrapper[31559]: I0216 02:26:12.675092 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 16 02:26:12.675283 master-0 kubenswrapper[31559]: I0216 02:26:12.675260 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 16 02:26:12.675455 master-0 kubenswrapper[31559]: I0216 02:26:12.675419 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 16 02:26:12.675628 master-0 kubenswrapper[31559]: I0216 02:26:12.675597 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 16 02:26:12.675713 master-0 kubenswrapper[31559]: I0216 02:26:12.675684 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 16 02:26:12.675763 master-0 kubenswrapper[31559]: I0216 02:26:12.675683 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 16 02:26:12.678605 master-0 kubenswrapper[31559]: I0216 02:26:12.678578 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 16 02:26:12.679265 master-0 kubenswrapper[31559]: I0216 02:26:12.679167 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 16 02:26:12.689280 master-0 kubenswrapper[31559]: I0216 02:26:12.689245 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 02:26:12.771497 master-0 kubenswrapper[31559]: I0216 02:26:12.771462 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:26:12.771740 master-0 kubenswrapper[31559]: I0216 02:26:12.771723 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:26:12.771835 master-0 kubenswrapper[31559]: I0216 02:26:12.771821 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:26:12.771917 master-0 kubenswrapper[31559]: I0216 02:26:12.771904 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:26:12.771999 master-0 kubenswrapper[31559]: I0216 02:26:12.771987 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772108 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772200 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772234 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: 
\"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772277 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772327 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772376 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772418 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772475 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-config\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772500 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772542 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772580 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 02:26:12.772621 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.773602 master-0 kubenswrapper[31559]: I0216 
02:26:12.772699 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj9nn\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-kube-api-access-qj9nn\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.874883 master-0 kubenswrapper[31559]: I0216 02:26:12.874749 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.874883 master-0 kubenswrapper[31559]: I0216 02:26:12.874829 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-config\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.874883 master-0 kubenswrapper[31559]: I0216 02:26:12.874869 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875363 master-0 kubenswrapper[31559]: I0216 02:26:12.875093 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875363 master-0 
kubenswrapper[31559]: I0216 02:26:12.875138 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875363 master-0 kubenswrapper[31559]: I0216 02:26:12.875182 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875363 master-0 kubenswrapper[31559]: I0216 02:26:12.875254 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj9nn\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-kube-api-access-qj9nn\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875363 master-0 kubenswrapper[31559]: I0216 02:26:12.875327 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875366 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 
02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875401 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875464 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875497 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875526 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875567 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: 
\"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875605 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875639 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.875707 master-0 kubenswrapper[31559]: I0216 02:26:12.875684 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.876091 master-0 kubenswrapper[31559]: I0216 02:26:12.875728 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.877702 master-0 kubenswrapper[31559]: E0216 02:26:12.877365 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle 
podName:9b50da24-3f10-4b81-be90-912874ed2629 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:13.377321716 +0000 UTC m=+225.721927771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "9b50da24-3f10-4b81-be90-912874ed2629") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:12.877702 master-0 kubenswrapper[31559]: I0216 02:26:12.877641 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.877874 master-0 kubenswrapper[31559]: I0216 02:26:12.877745 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.878240 master-0 kubenswrapper[31559]: I0216 02:26:12.878183 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.879257 master-0 kubenswrapper[31559]: I0216 02:26:12.878833 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.879638 master-0 kubenswrapper[31559]: I0216 02:26:12.879573 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.881071 master-0 kubenswrapper[31559]: I0216 02:26:12.881030 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.881071 master-0 kubenswrapper[31559]: I0216 02:26:12.881055 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-config\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.881228 master-0 kubenswrapper[31559]: I0216 02:26:12.881075 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.882185 master-0 kubenswrapper[31559]: I0216 02:26:12.881511 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-web-config\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.882185 master-0 kubenswrapper[31559]: I0216 02:26:12.881700 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.882185 master-0 kubenswrapper[31559]: I0216 02:26:12.881904 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.882756 master-0 kubenswrapper[31559]: I0216 02:26:12.882719 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.883225 master-0 kubenswrapper[31559]: I0216 02:26:12.883179 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.883630 master-0 kubenswrapper[31559]: I0216 02:26:12.883605 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-config-out\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.884588 master-0 kubenswrapper[31559]: I0216 02:26:12.884548 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.897541 master-0 kubenswrapper[31559]: I0216 02:26:12.896578 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:12.906564 master-0 kubenswrapper[31559]: I0216 02:26:12.906510 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj9nn\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-kube-api-access-qj9nn\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:13.388484 master-0 kubenswrapper[31559]: I0216 02:26:13.385287 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:13.388484 master-0 kubenswrapper[31559]: E0216 02:26:13.385511 31559 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle podName:9b50da24-3f10-4b81-be90-912874ed2629 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:14.385480004 +0000 UTC m=+226.730086059 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "9b50da24-3f10-4b81-be90-912874ed2629") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:14.010011 master-0 kubenswrapper[31559]: I0216 02:26:14.009900 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s"] Feb 16 02:26:14.011668 master-0 kubenswrapper[31559]: I0216 02:26:14.011611 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" Feb 16 02:26:14.016646 master-0 kubenswrapper[31559]: I0216 02:26:14.016570 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-f82q5" Feb 16 02:26:14.016876 master-0 kubenswrapper[31559]: I0216 02:26:14.016582 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 16 02:26:14.032922 master-0 kubenswrapper[31559]: I0216 02:26:14.032843 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s"] Feb 16 02:26:14.111651 master-0 kubenswrapper[31559]: I0216 02:26:14.111541 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3e421e82-707d-46cf-bd96-7d9853aa5bf4-monitoring-plugin-cert\") pod \"monitoring-plugin-6d686d4f46-cxb9s\" (UID: \"3e421e82-707d-46cf-bd96-7d9853aa5bf4\") " 
pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" Feb 16 02:26:14.214211 master-0 kubenswrapper[31559]: I0216 02:26:14.214130 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3e421e82-707d-46cf-bd96-7d9853aa5bf4-monitoring-plugin-cert\") pod \"monitoring-plugin-6d686d4f46-cxb9s\" (UID: \"3e421e82-707d-46cf-bd96-7d9853aa5bf4\") " pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" Feb 16 02:26:14.220478 master-0 kubenswrapper[31559]: I0216 02:26:14.220391 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3e421e82-707d-46cf-bd96-7d9853aa5bf4-monitoring-plugin-cert\") pod \"monitoring-plugin-6d686d4f46-cxb9s\" (UID: \"3e421e82-707d-46cf-bd96-7d9853aa5bf4\") " pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" Feb 16 02:26:14.344779 master-0 kubenswrapper[31559]: I0216 02:26:14.344598 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" Feb 16 02:26:14.418009 master-0 kubenswrapper[31559]: I0216 02:26:14.417937 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:14.418333 master-0 kubenswrapper[31559]: E0216 02:26:14.418206 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle podName:9b50da24-3f10-4b81-be90-912874ed2629 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:16.418169216 +0000 UTC m=+228.762775261 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "9b50da24-3f10-4b81-be90-912874ed2629") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:14.837805 master-0 kubenswrapper[31559]: I0216 02:26:14.837729 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s"] Feb 16 02:26:14.842212 master-0 kubenswrapper[31559]: W0216 02:26:14.842147 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e421e82_707d_46cf_bd96_7d9853aa5bf4.slice/crio-622e46c144d7014b33da56323793c7e4662ef2ab209ef2d6d3496eb1f963bc31 WatchSource:0}: Error finding container 622e46c144d7014b33da56323793c7e4662ef2ab209ef2d6d3496eb1f963bc31: Status 404 returned error can't find the container with id 622e46c144d7014b33da56323793c7e4662ef2ab209ef2d6d3496eb1f963bc31 Feb 16 02:26:15.563114 master-0 kubenswrapper[31559]: I0216 02:26:15.563024 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" event={"ID":"3e421e82-707d-46cf-bd96-7d9853aa5bf4","Type":"ContainerStarted","Data":"622e46c144d7014b33da56323793c7e4662ef2ab209ef2d6d3496eb1f963bc31"} Feb 16 02:26:15.751832 master-0 kubenswrapper[31559]: I0216 02:26:15.751765 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6fdf8859cd-rtsp7" Feb 16 02:26:15.846027 master-0 kubenswrapper[31559]: I0216 02:26:15.845671 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " 
pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" Feb 16 02:26:15.846749 master-0 kubenswrapper[31559]: E0216 02:26:15.846701 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca podName:8e140264-da5d-471d-8fee-a401deeadc83 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:23.846670795 +0000 UTC m=+236.191276840 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca") pod "console-operator-7777d5cc66-jw7d4" (UID: "8e140264-da5d-471d-8fee-a401deeadc83") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:16.457875 master-0 kubenswrapper[31559]: I0216 02:26:16.457813 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:16.458155 master-0 kubenswrapper[31559]: E0216 02:26:16.458103 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle podName:9b50da24-3f10-4b81-be90-912874ed2629 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:20.458068626 +0000 UTC m=+232.802674671 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "9b50da24-3f10-4b81-be90-912874ed2629") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:16.575030 master-0 kubenswrapper[31559]: I0216 02:26:16.574910 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" event={"ID":"3e421e82-707d-46cf-bd96-7d9853aa5bf4","Type":"ContainerStarted","Data":"0d5f6146056ca98e2fc5efec5c040f3e5b6b6ccdc51c2cf8b4309881804f7539"} Feb 16 02:26:16.575986 master-0 kubenswrapper[31559]: I0216 02:26:16.575333 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" Feb 16 02:26:16.578718 master-0 kubenswrapper[31559]: I0216 02:26:16.578655 31559 patch_prober.go:28] interesting pod/monitoring-plugin-6d686d4f46-cxb9s container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.101:9443/health\": dial tcp 10.128.0.101:9443: connect: connection refused" start-of-body= Feb 16 02:26:16.578871 master-0 kubenswrapper[31559]: I0216 02:26:16.578721 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" podUID="3e421e82-707d-46cf-bd96-7d9853aa5bf4" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.128.0.101:9443/health\": dial tcp 10.128.0.101:9443: connect: connection refused" Feb 16 02:26:16.602886 master-0 kubenswrapper[31559]: I0216 02:26:16.602667 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" podStartSLOduration=2.106567447 podStartE2EDuration="3.602643995s" podCreationTimestamp="2026-02-16 02:26:13 +0000 UTC" firstStartedPulling="2026-02-16 
02:26:14.846429874 +0000 UTC m=+227.191035929" lastFinishedPulling="2026-02-16 02:26:16.342506432 +0000 UTC m=+228.687112477" observedRunningTime="2026-02-16 02:26:16.597796502 +0000 UTC m=+228.942402557" watchObservedRunningTime="2026-02-16 02:26:16.602643995 +0000 UTC m=+228.947250050" Feb 16 02:26:17.591763 master-0 kubenswrapper[31559]: I0216 02:26:17.591679 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6d686d4f46-cxb9s" Feb 16 02:26:20.512934 master-0 kubenswrapper[31559]: I0216 02:26:20.512851 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:20.513419 master-0 kubenswrapper[31559]: E0216 02:26:20.513150 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle podName:9b50da24-3f10-4b81-be90-912874ed2629 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:28.513114259 +0000 UTC m=+240.857720314 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "9b50da24-3f10-4b81-be90-912874ed2629") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:23.902901 master-0 kubenswrapper[31559]: I0216 02:26:23.902807 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" Feb 16 02:26:23.903919 master-0 kubenswrapper[31559]: E0216 02:26:23.903066 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca podName:8e140264-da5d-471d-8fee-a401deeadc83 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:39.903039718 +0000 UTC m=+252.247645743 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca") pod "console-operator-7777d5cc66-jw7d4" (UID: "8e140264-da5d-471d-8fee-a401deeadc83") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:28.379646 master-0 kubenswrapper[31559]: I0216 02:26:28.379554 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf" Feb 16 02:26:28.380578 master-0 kubenswrapper[31559]: I0216 02:26:28.379732 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf" Feb 16 02:26:28.594065 master-0 kubenswrapper[31559]: I0216 02:26:28.593930 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:28.594401 master-0 kubenswrapper[31559]: E0216 02:26:28.594228 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle podName:9b50da24-3f10-4b81-be90-912874ed2629 nodeName:}" failed. No retries permitted until 2026-02-16 02:26:44.594192953 +0000 UTC m=+256.938799048 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "9b50da24-3f10-4b81-be90-912874ed2629") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:39.989977 master-0 kubenswrapper[31559]: I0216 02:26:39.989870 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" Feb 16 02:26:39.990918 master-0 kubenswrapper[31559]: E0216 02:26:39.990142 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca podName:8e140264-da5d-471d-8fee-a401deeadc83 nodeName:}" failed. No retries permitted until 2026-02-16 02:27:11.99011782 +0000 UTC m=+284.334723875 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca") pod "console-operator-7777d5cc66-jw7d4" (UID: "8e140264-da5d-471d-8fee-a401deeadc83") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:44.662814 master-0 kubenswrapper[31559]: I0216 02:26:44.662579 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:26:44.663907 master-0 kubenswrapper[31559]: E0216 02:26:44.662845 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle podName:9b50da24-3f10-4b81-be90-912874ed2629 nodeName:}" failed. No retries permitted until 2026-02-16 02:27:16.662810713 +0000 UTC m=+289.007416758 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "9b50da24-3f10-4b81-be90-912874ed2629") : configmap references non-existent config key: ca-bundle.crt Feb 16 02:26:48.392997 master-0 kubenswrapper[31559]: I0216 02:26:48.392876 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf" Feb 16 02:26:48.400421 master-0 kubenswrapper[31559]: I0216 02:26:48.400342 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-77778cfd78-cjbrf" Feb 16 02:26:50.616853 master-0 kubenswrapper[31559]: I0216 02:26:50.616783 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvk5x"] Feb 16 02:26:50.617979 master-0 kubenswrapper[31559]: I0216 02:26:50.617938 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.620183 master-0 kubenswrapper[31559]: I0216 02:26:50.620140 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-bhr6t" Feb 16 02:26:50.620412 master-0 kubenswrapper[31559]: I0216 02:26:50.620352 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 16 02:26:50.673120 master-0 kubenswrapper[31559]: I0216 02:26:50.673035 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khql\" (UniqueName: \"kubernetes.io/projected/98d5d80d-94e2-4110-9a60-22778825aafa-kube-api-access-6khql\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.673357 master-0 kubenswrapper[31559]: I0216 02:26:50.673136 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98d5d80d-94e2-4110-9a60-22778825aafa-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.673357 master-0 kubenswrapper[31559]: I0216 02:26:50.673191 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/98d5d80d-94e2-4110-9a60-22778825aafa-ready\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.673357 master-0 kubenswrapper[31559]: I0216 02:26:50.673220 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/98d5d80d-94e2-4110-9a60-22778825aafa-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.774712 master-0 kubenswrapper[31559]: I0216 02:26:50.774610 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6khql\" (UniqueName: \"kubernetes.io/projected/98d5d80d-94e2-4110-9a60-22778825aafa-kube-api-access-6khql\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.775036 master-0 kubenswrapper[31559]: I0216 02:26:50.774748 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98d5d80d-94e2-4110-9a60-22778825aafa-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.775036 master-0 kubenswrapper[31559]: I0216 02:26:50.774848 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/98d5d80d-94e2-4110-9a60-22778825aafa-ready\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.775036 master-0 kubenswrapper[31559]: I0216 02:26:50.774905 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/98d5d80d-94e2-4110-9a60-22778825aafa-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.775036 master-0 kubenswrapper[31559]: I0216 02:26:50.774916 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98d5d80d-94e2-4110-9a60-22778825aafa-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.776027 master-0 kubenswrapper[31559]: I0216 02:26:50.775969 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/98d5d80d-94e2-4110-9a60-22778825aafa-ready\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.776494 master-0 kubenswrapper[31559]: I0216 02:26:50.776424 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/98d5d80d-94e2-4110-9a60-22778825aafa-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.792029 master-0 kubenswrapper[31559]: I0216 02:26:50.791956 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6khql\" (UniqueName: \"kubernetes.io/projected/98d5d80d-94e2-4110-9a60-22778825aafa-kube-api-access-6khql\") pod \"cni-sysctl-allowlist-ds-hvk5x\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.941822 master-0 kubenswrapper[31559]: I0216 02:26:50.941727 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:50.979596 master-0 kubenswrapper[31559]: W0216 02:26:50.979518 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98d5d80d_94e2_4110_9a60_22778825aafa.slice/crio-df6e9461a193c2f9468a309489fadaeec2fecd9e6a63f507139c4b21ea0e609e WatchSource:0}: Error finding container df6e9461a193c2f9468a309489fadaeec2fecd9e6a63f507139c4b21ea0e609e: Status 404 returned error can't find the container with id df6e9461a193c2f9468a309489fadaeec2fecd9e6a63f507139c4b21ea0e609e Feb 16 02:26:51.877987 master-0 kubenswrapper[31559]: I0216 02:26:51.877912 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" event={"ID":"98d5d80d-94e2-4110-9a60-22778825aafa","Type":"ContainerStarted","Data":"4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6"} Feb 16 02:26:51.877987 master-0 kubenswrapper[31559]: I0216 02:26:51.877986 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" event={"ID":"98d5d80d-94e2-4110-9a60-22778825aafa","Type":"ContainerStarted","Data":"df6e9461a193c2f9468a309489fadaeec2fecd9e6a63f507139c4b21ea0e609e"} Feb 16 02:26:51.879306 master-0 kubenswrapper[31559]: I0216 02:26:51.878594 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:51.914292 master-0 kubenswrapper[31559]: I0216 02:26:51.914168 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" podStartSLOduration=1.914139886 podStartE2EDuration="1.914139886s" podCreationTimestamp="2026-02-16 02:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:26:51.90739486 +0000 UTC 
m=+264.252000915" watchObservedRunningTime="2026-02-16 02:26:51.914139886 +0000 UTC m=+264.258745941" Feb 16 02:26:52.922397 master-0 kubenswrapper[31559]: I0216 02:26:52.922307 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" Feb 16 02:26:53.614074 master-0 kubenswrapper[31559]: I0216 02:26:53.613979 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvk5x"] Feb 16 02:26:55.037652 master-0 kubenswrapper[31559]: I0216 02:26:55.037522 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" podUID="98d5d80d-94e2-4110-9a60-22778825aafa" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" gracePeriod=30 Feb 16 02:26:58.169046 master-0 kubenswrapper[31559]: I0216 02:26:58.168153 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-6d8bc77978-wt9kt"] Feb 16 02:26:58.170473 master-0 kubenswrapper[31559]: I0216 02:26:58.170214 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.173161 master-0 kubenswrapper[31559]: I0216 02:26:58.172307 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 16 02:26:58.174476 master-0 kubenswrapper[31559]: I0216 02:26:58.173329 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 16 02:26:58.174476 master-0 kubenswrapper[31559]: I0216 02:26:58.173581 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-sfwp5" Feb 16 02:26:58.174476 master-0 kubenswrapper[31559]: I0216 02:26:58.173832 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 16 02:26:58.174476 master-0 kubenswrapper[31559]: I0216 02:26:58.173982 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 16 02:26:58.174476 master-0 kubenswrapper[31559]: I0216 02:26:58.174126 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 16 02:26:58.198409 master-0 kubenswrapper[31559]: I0216 02:26:58.198123 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 16 02:26:58.211584 master-0 kubenswrapper[31559]: I0216 02:26:58.211510 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6d8bc77978-wt9kt"] Feb 16 02:26:58.290304 master-0 kubenswrapper[31559]: I0216 02:26:58.290225 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-federate-client-tls\") pod 
\"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.290304 master-0 kubenswrapper[31559]: I0216 02:26:58.290294 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdmz\" (UniqueName: \"kubernetes.io/projected/01fa5926-2e51-440f-a93c-44b07c7d37de-kube-api-access-5wdmz\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.290665 master-0 kubenswrapper[31559]: I0216 02:26:58.290360 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-metrics-client-ca\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.290665 master-0 kubenswrapper[31559]: I0216 02:26:58.290393 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.290665 master-0 kubenswrapper[31559]: I0216 02:26:58.290429 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-telemeter-client-tls\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " 
pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.290665 master-0 kubenswrapper[31559]: I0216 02:26:58.290485 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-secret-telemeter-client\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.290665 master-0 kubenswrapper[31559]: I0216 02:26:58.290513 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-serving-certs-ca-bundle\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.290665 master-0 kubenswrapper[31559]: I0216 02:26:58.290583 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.392456 master-0 kubenswrapper[31559]: I0216 02:26:58.392377 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-metrics-client-ca\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.392456 master-0 kubenswrapper[31559]: I0216 02:26:58.392465 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.392733 master-0 kubenswrapper[31559]: I0216 02:26:58.392496 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-telemeter-client-tls\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.392733 master-0 kubenswrapper[31559]: I0216 02:26:58.392524 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-secret-telemeter-client\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.393195 master-0 kubenswrapper[31559]: I0216 02:26:58.393149 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-serving-certs-ca-bundle\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.393332 master-0 kubenswrapper[31559]: I0216 02:26:58.393299 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-telemeter-trusted-ca-bundle\") pod 
\"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.394298 master-0 kubenswrapper[31559]: I0216 02:26:58.393680 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-federate-client-tls\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.394560 master-0 kubenswrapper[31559]: I0216 02:26:58.394501 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-metrics-client-ca\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.394641 master-0 kubenswrapper[31559]: I0216 02:26:58.394561 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdmz\" (UniqueName: \"kubernetes.io/projected/01fa5926-2e51-440f-a93c-44b07c7d37de-kube-api-access-5wdmz\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.394686 master-0 kubenswrapper[31559]: I0216 02:26:58.394080 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-serving-certs-ca-bundle\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.394969 master-0 kubenswrapper[31559]: I0216 02:26:58.394932 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01fa5926-2e51-440f-a93c-44b07c7d37de-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.395932 master-0 kubenswrapper[31559]: I0216 02:26:58.395899 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.396427 master-0 kubenswrapper[31559]: I0216 02:26:58.396398 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-secret-telemeter-client\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.396692 master-0 kubenswrapper[31559]: I0216 02:26:58.396649 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-telemeter-client-tls\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.397050 master-0 kubenswrapper[31559]: I0216 02:26:58.396979 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/01fa5926-2e51-440f-a93c-44b07c7d37de-federate-client-tls\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: 
\"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.420501 master-0 kubenswrapper[31559]: I0216 02:26:58.420398 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdmz\" (UniqueName: \"kubernetes.io/projected/01fa5926-2e51-440f-a93c-44b07c7d37de-kube-api-access-5wdmz\") pod \"telemeter-client-6d8bc77978-wt9kt\" (UID: \"01fa5926-2e51-440f-a93c-44b07c7d37de\") " pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.500247 master-0 kubenswrapper[31559]: I0216 02:26:58.500172 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" Feb 16 02:26:58.962631 master-0 kubenswrapper[31559]: I0216 02:26:58.962543 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6d8bc77978-wt9kt"] Feb 16 02:26:58.968882 master-0 kubenswrapper[31559]: W0216 02:26:58.968806 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01fa5926_2e51_440f_a93c_44b07c7d37de.slice/crio-5a6c49c70261c013a1e3488a2991401f84fec6f4bd6557b99b2cc25126f34597 WatchSource:0}: Error finding container 5a6c49c70261c013a1e3488a2991401f84fec6f4bd6557b99b2cc25126f34597: Status 404 returned error can't find the container with id 5a6c49c70261c013a1e3488a2991401f84fec6f4bd6557b99b2cc25126f34597 Feb 16 02:26:59.074854 master-0 kubenswrapper[31559]: I0216 02:26:59.074765 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" event={"ID":"01fa5926-2e51-440f-a93c-44b07c7d37de","Type":"ContainerStarted","Data":"5a6c49c70261c013a1e3488a2991401f84fec6f4bd6557b99b2cc25126f34597"} Feb 16 02:26:59.723643 master-0 kubenswrapper[31559]: I0216 02:26:59.723544 31559 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 02:26:59.731459 master-0 kubenswrapper[31559]: I0216 02:26:59.731353 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.744572 master-0 kubenswrapper[31559]: I0216 02:26:59.741208 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 02:26:59.744572 master-0 kubenswrapper[31559]: I0216 02:26:59.741243 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 02:26:59.744572 master-0 kubenswrapper[31559]: I0216 02:26:59.742544 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 02:26:59.744572 master-0 kubenswrapper[31559]: I0216 02:26:59.743244 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 02:26:59.744572 master-0 kubenswrapper[31559]: I0216 02:26:59.743893 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 02:26:59.745057 master-0 kubenswrapper[31559]: I0216 02:26:59.744688 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 02:26:59.745125 master-0 kubenswrapper[31559]: I0216 02:26:59.745086 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 02:26:59.751885 master-0 kubenswrapper[31559]: I0216 02:26:59.751799 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 02:26:59.753278 master-0 kubenswrapper[31559]: I0216 02:26:59.753199 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 02:26:59.918549 master-0 kubenswrapper[31559]: I0216 02:26:59.918475 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snbx\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-kube-api-access-2snbx\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.918549 master-0 kubenswrapper[31559]: I0216 02:26:59.918542 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-web-config\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.918845 master-0 kubenswrapper[31559]: I0216 02:26:59.918647 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.918889 master-0 kubenswrapper[31559]: I0216 02:26:59.918813 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.918984 master-0 kubenswrapper[31559]: I0216 02:26:59.918949 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-config-out\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.919642 master-0 kubenswrapper[31559]: I0216 02:26:59.919611 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.919824 master-0 kubenswrapper[31559]: I0216 02:26:59.919669 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.919824 master-0 kubenswrapper[31559]: I0216 02:26:59.919719 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.919824 master-0 kubenswrapper[31559]: I0216 02:26:59.919762 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-config-volume\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.919824 master-0 
kubenswrapper[31559]: I0216 02:26:59.919788 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-tls-assets\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.920027 master-0 kubenswrapper[31559]: I0216 02:26:59.919835 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:26:59.920027 master-0 kubenswrapper[31559]: I0216 02:26:59.919897 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.022245 master-0 kubenswrapper[31559]: I0216 02:27:00.022025 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.022472 master-0 kubenswrapper[31559]: I0216 02:27:00.022303 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-config-volume\") pod \"alertmanager-main-0\" (UID: 
\"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.022522 master-0 kubenswrapper[31559]: I0216 02:27:00.022505 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-tls-assets\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.022636 master-0 kubenswrapper[31559]: I0216 02:27:00.022595 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.023008 master-0 kubenswrapper[31559]: I0216 02:27:00.022925 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.023329 master-0 kubenswrapper[31559]: I0216 02:27:00.023287 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snbx\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-kube-api-access-2snbx\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.023529 master-0 kubenswrapper[31559]: I0216 02:27:00.023474 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-web-config\") pod 
\"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.023698 master-0 kubenswrapper[31559]: I0216 02:27:00.023656 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.024048 master-0 kubenswrapper[31559]: I0216 02:27:00.024005 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.025123 master-0 kubenswrapper[31559]: I0216 02:27:00.024406 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-config-out\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.025123 master-0 kubenswrapper[31559]: I0216 02:27:00.024855 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.025123 master-0 kubenswrapper[31559]: I0216 02:27:00.024760 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.025123 master-0 kubenswrapper[31559]: I0216 02:27:00.025075 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.028052 master-0 kubenswrapper[31559]: I0216 02:27:00.028006 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.029367 master-0 kubenswrapper[31559]: I0216 02:27:00.029322 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.030629 master-0 kubenswrapper[31559]: I0216 02:27:00.030563 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-config-out\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.030707 master-0 kubenswrapper[31559]: I0216 02:27:00.030633 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-web-config\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.030823 master-0 kubenswrapper[31559]: I0216 02:27:00.030753 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.030876 master-0 kubenswrapper[31559]: I0216 02:27:00.030821 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.030876 master-0 kubenswrapper[31559]: I0216 02:27:00.030759 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-config-volume\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.031363 master-0 kubenswrapper[31559]: I0216 02:27:00.031313 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.032207 master-0 kubenswrapper[31559]: I0216 02:27:00.032150 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-tls-assets\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.038057 master-0 kubenswrapper[31559]: I0216 02:27:00.038038 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.054948 master-0 kubenswrapper[31559]: I0216 02:27:00.054885 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2snbx\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-kube-api-access-2snbx\") pod \"alertmanager-main-0\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.077526 master-0 kubenswrapper[31559]: I0216 02:27:00.077480 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:27:00.233243 master-0 kubenswrapper[31559]: I0216 02:27:00.233180 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-bb4ff5654-2w5wl"] Feb 16 02:27:00.234581 master-0 kubenswrapper[31559]: I0216 02:27:00.234543 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" Feb 16 02:27:00.246644 master-0 kubenswrapper[31559]: I0216 02:27:00.246521 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-bb4ff5654-2w5wl"] Feb 16 02:27:00.330364 master-0 kubenswrapper[31559]: I0216 02:27:00.330270 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4cn8\" (UniqueName: \"kubernetes.io/projected/493388e5-bf96-498a-88ec-0a21fb5a5a1e-kube-api-access-w4cn8\") pod \"multus-admission-controller-bb4ff5654-2w5wl\" (UID: \"493388e5-bf96-498a-88ec-0a21fb5a5a1e\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" Feb 16 02:27:00.330687 master-0 kubenswrapper[31559]: I0216 02:27:00.330666 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/493388e5-bf96-498a-88ec-0a21fb5a5a1e-webhook-certs\") pod \"multus-admission-controller-bb4ff5654-2w5wl\" (UID: \"493388e5-bf96-498a-88ec-0a21fb5a5a1e\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" Feb 16 02:27:00.431893 master-0 kubenswrapper[31559]: I0216 02:27:00.431748 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/493388e5-bf96-498a-88ec-0a21fb5a5a1e-webhook-certs\") pod \"multus-admission-controller-bb4ff5654-2w5wl\" (UID: \"493388e5-bf96-498a-88ec-0a21fb5a5a1e\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" Feb 16 02:27:00.431893 master-0 kubenswrapper[31559]: I0216 02:27:00.431828 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4cn8\" (UniqueName: \"kubernetes.io/projected/493388e5-bf96-498a-88ec-0a21fb5a5a1e-kube-api-access-w4cn8\") pod \"multus-admission-controller-bb4ff5654-2w5wl\" (UID: 
\"493388e5-bf96-498a-88ec-0a21fb5a5a1e\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" Feb 16 02:27:00.436756 master-0 kubenswrapper[31559]: I0216 02:27:00.436701 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/493388e5-bf96-498a-88ec-0a21fb5a5a1e-webhook-certs\") pod \"multus-admission-controller-bb4ff5654-2w5wl\" (UID: \"493388e5-bf96-498a-88ec-0a21fb5a5a1e\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" Feb 16 02:27:00.446742 master-0 kubenswrapper[31559]: I0216 02:27:00.446645 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4cn8\" (UniqueName: \"kubernetes.io/projected/493388e5-bf96-498a-88ec-0a21fb5a5a1e-kube-api-access-w4cn8\") pod \"multus-admission-controller-bb4ff5654-2w5wl\" (UID: \"493388e5-bf96-498a-88ec-0a21fb5a5a1e\") " pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" Feb 16 02:27:00.581611 master-0 kubenswrapper[31559]: I0216 02:27:00.580781 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" Feb 16 02:27:00.655512 master-0 kubenswrapper[31559]: I0216 02:27:00.654175 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 02:27:00.944765 master-0 kubenswrapper[31559]: E0216 02:27:00.944711 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:00.947463 master-0 kubenswrapper[31559]: E0216 02:27:00.947420 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:00.948735 master-0 kubenswrapper[31559]: E0216 02:27:00.948669 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:00.948786 master-0 kubenswrapper[31559]: E0216 02:27:00.948752 31559 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" podUID="98d5d80d-94e2-4110-9a60-22778825aafa" containerName="kube-multus-additional-cni-plugins" Feb 16 02:27:01.239221 master-0 kubenswrapper[31559]: W0216 02:27:01.239086 31559 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40960ec6_becf_40a6_ad1e_828df1c42847.slice/crio-30bc49d8eb52a4442f4fee04967ac9e14801e56406eb0ee531f12d2e49257060 WatchSource:0}: Error finding container 30bc49d8eb52a4442f4fee04967ac9e14801e56406eb0ee531f12d2e49257060: Status 404 returned error can't find the container with id 30bc49d8eb52a4442f4fee04967ac9e14801e56406eb0ee531f12d2e49257060 Feb 16 02:27:01.770035 master-0 kubenswrapper[31559]: I0216 02:27:01.769961 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-bb4ff5654-2w5wl"] Feb 16 02:27:02.134121 master-0 kubenswrapper[31559]: I0216 02:27:02.134027 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" event={"ID":"01fa5926-2e51-440f-a93c-44b07c7d37de","Type":"ContainerStarted","Data":"c7bd87338976ef796c4223be4f55ba07b050267fa07755fc02d7a8a9c9ccf1ee"} Feb 16 02:27:02.137177 master-0 kubenswrapper[31559]: I0216 02:27:02.136819 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerStarted","Data":"30bc49d8eb52a4442f4fee04967ac9e14801e56406eb0ee531f12d2e49257060"} Feb 16 02:27:02.142586 master-0 kubenswrapper[31559]: I0216 02:27:02.142529 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" event={"ID":"493388e5-bf96-498a-88ec-0a21fb5a5a1e","Type":"ContainerStarted","Data":"68d2954495d6de740b57f68b820f2033c38d2d620ee3a8a670c7cd9b1c9de7ba"} Feb 16 02:27:03.160459 master-0 kubenswrapper[31559]: I0216 02:27:03.160372 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" 
event={"ID":"01fa5926-2e51-440f-a93c-44b07c7d37de","Type":"ContainerStarted","Data":"d95af1965788349854911cab31957514d11b29d9cd08e073e17153fc63abaf1e"} Feb 16 02:27:03.163357 master-0 kubenswrapper[31559]: I0216 02:27:03.163299 31559 generic.go:334] "Generic (PLEG): container finished" podID="40960ec6-becf-40a6-ad1e-828df1c42847" containerID="3c67c940a1d1d9bb40cd9ec1e8f020cd76c38ef2a18a5d3e2bb8915c2c5b1a3c" exitCode=0 Feb 16 02:27:03.163497 master-0 kubenswrapper[31559]: I0216 02:27:03.163421 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerDied","Data":"3c67c940a1d1d9bb40cd9ec1e8f020cd76c38ef2a18a5d3e2bb8915c2c5b1a3c"} Feb 16 02:27:03.169710 master-0 kubenswrapper[31559]: I0216 02:27:03.169630 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" event={"ID":"493388e5-bf96-498a-88ec-0a21fb5a5a1e","Type":"ContainerStarted","Data":"1fcb05a0d33f96e0847a145395191950bccf394ad315c05c1d8173a922ca2af7"} Feb 16 02:27:03.169710 master-0 kubenswrapper[31559]: I0216 02:27:03.169709 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" event={"ID":"493388e5-bf96-498a-88ec-0a21fb5a5a1e","Type":"ContainerStarted","Data":"ccd9c8d1a2a7de0c43564dbf0c07380fc9b99f8e8b5b09214c1672e4637d4ecb"} Feb 16 02:27:03.240579 master-0 kubenswrapper[31559]: I0216 02:27:03.240410 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-bb4ff5654-2w5wl" podStartSLOduration=3.240383146 podStartE2EDuration="3.240383146s" podCreationTimestamp="2026-02-16 02:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:27:03.233516247 +0000 UTC m=+275.578122302" 
watchObservedRunningTime="2026-02-16 02:27:03.240383146 +0000 UTC m=+275.584989171" Feb 16 02:27:03.283482 master-0 kubenswrapper[31559]: I0216 02:27:03.283328 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"] Feb 16 02:27:03.283759 master-0 kubenswrapper[31559]: I0216 02:27:03.283690 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" containerName="multus-admission-controller" containerID="cri-o://30e7ac434b2ff8376d8f01a24e4deb497d95be6f36eeba191f63ffea76f881d2" gracePeriod=30 Feb 16 02:27:03.284295 master-0 kubenswrapper[31559]: I0216 02:27:03.284222 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" containerName="kube-rbac-proxy" containerID="cri-o://98ee9d5b95bc1d66557aa51d3718ad8ae4d6135a3674e22406a779cde9ce0095" gracePeriod=30 Feb 16 02:27:04.179069 master-0 kubenswrapper[31559]: I0216 02:27:04.179002 31559 generic.go:334] "Generic (PLEG): container finished" podID="c8086f93-2d98-4218-afac-20a65e6bf943" containerID="98ee9d5b95bc1d66557aa51d3718ad8ae4d6135a3674e22406a779cde9ce0095" exitCode=0 Feb 16 02:27:04.179857 master-0 kubenswrapper[31559]: I0216 02:27:04.179076 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" event={"ID":"c8086f93-2d98-4218-afac-20a65e6bf943","Type":"ContainerDied","Data":"98ee9d5b95bc1d66557aa51d3718ad8ae4d6135a3674e22406a779cde9ce0095"} Feb 16 02:27:04.181205 master-0 kubenswrapper[31559]: I0216 02:27:04.181154 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" 
event={"ID":"01fa5926-2e51-440f-a93c-44b07c7d37de","Type":"ContainerStarted","Data":"8f51c681508b596d68c4003a4e152fdc8ba2beebea7ab6454963a99205a957f6"} Feb 16 02:27:04.243942 master-0 kubenswrapper[31559]: I0216 02:27:04.243590 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-6d8bc77978-wt9kt" podStartSLOduration=2.455408813 podStartE2EDuration="6.243553411s" podCreationTimestamp="2026-02-16 02:26:58 +0000 UTC" firstStartedPulling="2026-02-16 02:26:58.973022264 +0000 UTC m=+271.317628339" lastFinishedPulling="2026-02-16 02:27:02.761166882 +0000 UTC m=+275.105772937" observedRunningTime="2026-02-16 02:27:04.209209206 +0000 UTC m=+276.553815251" watchObservedRunningTime="2026-02-16 02:27:04.243553411 +0000 UTC m=+276.588159466" Feb 16 02:27:05.193990 master-0 kubenswrapper[31559]: I0216 02:27:05.193905 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerStarted","Data":"99aee9a4c80979659413e7d34094c8562f17b775b31490e30d0158db7a668e2f"} Feb 16 02:27:06.217420 master-0 kubenswrapper[31559]: I0216 02:27:06.217357 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerStarted","Data":"262f433d546e0685ed7e4ccfbe391174ee356b8b31595276fca13f3ac6e8fa87"} Feb 16 02:27:06.218328 master-0 kubenswrapper[31559]: I0216 02:27:06.217426 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerStarted","Data":"d698d4f4ed5397da53ee6958c378cbd852fbe16cca431ad7281cf26b7ddc6f25"} Feb 16 02:27:06.218328 master-0 kubenswrapper[31559]: I0216 02:27:06.217482 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerStarted","Data":"2c60cfdfded888fe1d8a63276e7725e268f9cf7599efabdc3a94932d6b91f55f"} Feb 16 02:27:06.218328 master-0 kubenswrapper[31559]: I0216 02:27:06.217504 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerStarted","Data":"fcb86f3336f6e01e939e46616a2206b7d0d1035dfeebea23c097729bb4c05fc8"} Feb 16 02:27:06.218328 master-0 kubenswrapper[31559]: I0216 02:27:06.217522 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerStarted","Data":"4ccaee1c7cc0a1374adb4f0df1c083c907e88185b22476eaad2a88d8cf10106d"} Feb 16 02:27:06.267263 master-0 kubenswrapper[31559]: I0216 02:27:06.267038 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.58946859 podStartE2EDuration="7.267009657s" podCreationTimestamp="2026-02-16 02:26:59 +0000 UTC" firstStartedPulling="2026-02-16 02:27:01.244307335 +0000 UTC m=+273.588913350" lastFinishedPulling="2026-02-16 02:27:04.921848402 +0000 UTC m=+277.266454417" observedRunningTime="2026-02-16 02:27:06.264714667 +0000 UTC m=+278.609320722" watchObservedRunningTime="2026-02-16 02:27:06.267009657 +0000 UTC m=+278.611615712" Feb 16 02:27:10.944179 master-0 kubenswrapper[31559]: E0216 02:27:10.944113 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:10.945710 master-0 kubenswrapper[31559]: E0216 02:27:10.945622 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:10.947788 master-0 kubenswrapper[31559]: E0216 02:27:10.947719 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:10.947896 master-0 kubenswrapper[31559]: E0216 02:27:10.947800 31559 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" podUID="98d5d80d-94e2-4110-9a60-22778825aafa" containerName="kube-multus-additional-cni-plugins" Feb 16 02:27:12.091022 master-0 kubenswrapper[31559]: I0216 02:27:12.090932 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" Feb 16 02:27:12.093671 master-0 kubenswrapper[31559]: I0216 02:27:12.093603 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e140264-da5d-471d-8fee-a401deeadc83-trusted-ca\") pod \"console-operator-7777d5cc66-jw7d4\" (UID: \"8e140264-da5d-471d-8fee-a401deeadc83\") " pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" Feb 16 02:27:12.235230 master-0 kubenswrapper[31559]: I0216 02:27:12.235175 31559 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" Feb 16 02:27:12.789099 master-0 kubenswrapper[31559]: I0216 02:27:12.789015 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-jw7d4"] Feb 16 02:27:13.281269 master-0 kubenswrapper[31559]: I0216 02:27:13.281197 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" event={"ID":"8e140264-da5d-471d-8fee-a401deeadc83","Type":"ContainerStarted","Data":"0a8a15fecb0c47e76fc69a265ecff96d2b804ba07b28d69f3581f87a58d40925"} Feb 16 02:27:16.306447 master-0 kubenswrapper[31559]: I0216 02:27:16.306292 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" event={"ID":"8e140264-da5d-471d-8fee-a401deeadc83","Type":"ContainerStarted","Data":"fa542fa69934c49a901df2cd2ed21a21226d8757f8a393f6958c0849867fbdd4"} Feb 16 02:27:16.307688 master-0 kubenswrapper[31559]: I0216 02:27:16.307621 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" Feb 16 02:27:16.331356 master-0 kubenswrapper[31559]: I0216 02:27:16.331278 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" podStartSLOduration=66.146337968 podStartE2EDuration="1m9.331259001s" podCreationTimestamp="2026-02-16 02:26:07 +0000 UTC" firstStartedPulling="2026-02-16 02:27:12.796577835 +0000 UTC m=+285.141183881" lastFinishedPulling="2026-02-16 02:27:15.981498899 +0000 UTC m=+288.326104914" observedRunningTime="2026-02-16 02:27:16.331080626 +0000 UTC m=+288.675686661" watchObservedRunningTime="2026-02-16 02:27:16.331259001 +0000 UTC m=+288.675865016" Feb 16 02:27:16.588676 master-0 kubenswrapper[31559]: I0216 02:27:16.588419 31559 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-7777d5cc66-jw7d4" Feb 16 02:27:16.677383 master-0 kubenswrapper[31559]: I0216 02:27:16.677249 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:27:16.678417 master-0 kubenswrapper[31559]: I0216 02:27:16.678386 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:27:16.777199 master-0 kubenswrapper[31559]: I0216 02:27:16.777144 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-dcd7b7d95-kdt9k"] Feb 16 02:27:16.778103 master-0 kubenswrapper[31559]: I0216 02:27:16.778084 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-kdt9k" Feb 16 02:27:16.779780 master-0 kubenswrapper[31559]: I0216 02:27:16.779726 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 02:27:16.780006 master-0 kubenswrapper[31559]: I0216 02:27:16.779980 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 02:27:16.788189 master-0 kubenswrapper[31559]: I0216 02:27:16.788122 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-dcd7b7d95-kdt9k"] Feb 16 02:27:16.880308 master-0 kubenswrapper[31559]: I0216 02:27:16.880153 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jhfb\" (UniqueName: \"kubernetes.io/projected/05a7bdc0-0106-4039-8076-ebc8b286de47-kube-api-access-8jhfb\") pod \"downloads-dcd7b7d95-kdt9k\" (UID: \"05a7bdc0-0106-4039-8076-ebc8b286de47\") " pod="openshift-console/downloads-dcd7b7d95-kdt9k" Feb 16 02:27:16.883534 master-0 kubenswrapper[31559]: I0216 02:27:16.883501 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:27:16.981722 master-0 kubenswrapper[31559]: I0216 02:27:16.981634 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jhfb\" (UniqueName: \"kubernetes.io/projected/05a7bdc0-0106-4039-8076-ebc8b286de47-kube-api-access-8jhfb\") pod \"downloads-dcd7b7d95-kdt9k\" (UID: \"05a7bdc0-0106-4039-8076-ebc8b286de47\") " pod="openshift-console/downloads-dcd7b7d95-kdt9k" Feb 16 02:27:16.998671 master-0 kubenswrapper[31559]: I0216 02:27:16.998629 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jhfb\" (UniqueName: \"kubernetes.io/projected/05a7bdc0-0106-4039-8076-ebc8b286de47-kube-api-access-8jhfb\") pod \"downloads-dcd7b7d95-kdt9k\" (UID: \"05a7bdc0-0106-4039-8076-ebc8b286de47\") " pod="openshift-console/downloads-dcd7b7d95-kdt9k" Feb 16 02:27:17.096857 master-0 kubenswrapper[31559]: I0216 02:27:17.096365 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-kdt9k" Feb 16 02:27:17.353118 master-0 kubenswrapper[31559]: I0216 02:27:17.353068 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 02:27:17.391064 master-0 kubenswrapper[31559]: I0216 02:27:17.390764 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8"] Feb 16 02:27:17.401145 master-0 kubenswrapper[31559]: I0216 02:27:17.399973 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:17.403396 master-0 kubenswrapper[31559]: I0216 02:27:17.403320 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 02:27:17.403881 master-0 kubenswrapper[31559]: I0216 02:27:17.403812 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 02:27:17.411826 master-0 kubenswrapper[31559]: I0216 02:27:17.409784 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8"] Feb 16 02:27:17.492404 master-0 kubenswrapper[31559]: I0216 02:27:17.492201 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3d09e6f1-f673-4f7a-9b09-6fa48874103c-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-5f9t8\" (UID: \"3d09e6f1-f673-4f7a-9b09-6fa48874103c\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:17.492734 master-0 kubenswrapper[31559]: I0216 02:27:17.492649 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3d09e6f1-f673-4f7a-9b09-6fa48874103c-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-5f9t8\" (UID: \"3d09e6f1-f673-4f7a-9b09-6fa48874103c\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:17.573379 master-0 kubenswrapper[31559]: I0216 02:27:17.573151 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-dcd7b7d95-kdt9k"] Feb 16 02:27:17.580134 master-0 kubenswrapper[31559]: W0216 02:27:17.580099 31559 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05a7bdc0_0106_4039_8076_ebc8b286de47.slice/crio-07a2db305d0946a64515be6fcd987cbe565cbee0cf39676a7d7cafe7da8961e2 WatchSource:0}: Error finding container 07a2db305d0946a64515be6fcd987cbe565cbee0cf39676a7d7cafe7da8961e2: Status 404 returned error can't find the container with id 07a2db305d0946a64515be6fcd987cbe565cbee0cf39676a7d7cafe7da8961e2 Feb 16 02:27:17.594770 master-0 kubenswrapper[31559]: I0216 02:27:17.594719 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3d09e6f1-f673-4f7a-9b09-6fa48874103c-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-5f9t8\" (UID: \"3d09e6f1-f673-4f7a-9b09-6fa48874103c\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:17.594979 master-0 kubenswrapper[31559]: E0216 02:27:17.594929 31559 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 16 02:27:17.596623 master-0 kubenswrapper[31559]: E0216 02:27:17.595016 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d09e6f1-f673-4f7a-9b09-6fa48874103c-networking-console-plugin-cert podName:3d09e6f1-f673-4f7a-9b09-6fa48874103c nodeName:}" failed. No retries permitted until 2026-02-16 02:27:18.094995633 +0000 UTC m=+290.439601658 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/3d09e6f1-f673-4f7a-9b09-6fa48874103c-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-5f9t8" (UID: "3d09e6f1-f673-4f7a-9b09-6fa48874103c") : secret "networking-console-plugin-cert" not found Feb 16 02:27:17.597142 master-0 kubenswrapper[31559]: I0216 02:27:17.597104 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3d09e6f1-f673-4f7a-9b09-6fa48874103c-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-5f9t8\" (UID: \"3d09e6f1-f673-4f7a-9b09-6fa48874103c\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:17.598678 master-0 kubenswrapper[31559]: I0216 02:27:17.598628 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3d09e6f1-f673-4f7a-9b09-6fa48874103c-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-5f9t8\" (UID: \"3d09e6f1-f673-4f7a-9b09-6fa48874103c\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:18.105995 master-0 kubenswrapper[31559]: I0216 02:27:18.105926 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3d09e6f1-f673-4f7a-9b09-6fa48874103c-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-5f9t8\" (UID: \"3d09e6f1-f673-4f7a-9b09-6fa48874103c\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:18.110725 master-0 kubenswrapper[31559]: I0216 02:27:18.110685 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3d09e6f1-f673-4f7a-9b09-6fa48874103c-networking-console-plugin-cert\") pod 
\"networking-console-plugin-bd6d6f87f-5f9t8\" (UID: \"3d09e6f1-f673-4f7a-9b09-6fa48874103c\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:18.171571 master-0 kubenswrapper[31559]: I0216 02:27:18.171504 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" Feb 16 02:27:18.332044 master-0 kubenswrapper[31559]: I0216 02:27:18.331938 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-kdt9k" event={"ID":"05a7bdc0-0106-4039-8076-ebc8b286de47","Type":"ContainerStarted","Data":"07a2db305d0946a64515be6fcd987cbe565cbee0cf39676a7d7cafe7da8961e2"} Feb 16 02:27:18.337635 master-0 kubenswrapper[31559]: I0216 02:27:18.337596 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b50da24-3f10-4b81-be90-912874ed2629" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" exitCode=0 Feb 16 02:27:18.338046 master-0 kubenswrapper[31559]: I0216 02:27:18.337972 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerDied","Data":"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab"} Feb 16 02:27:18.338119 master-0 kubenswrapper[31559]: I0216 02:27:18.338094 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerStarted","Data":"f22f528c13db514a186ce08f072b116dc1c1208712f39c43bbeee8730ef240a7"} Feb 16 02:27:18.636845 master-0 kubenswrapper[31559]: I0216 02:27:18.636788 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8"] Feb 16 02:27:19.347406 master-0 kubenswrapper[31559]: I0216 02:27:19.347335 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" event={"ID":"3d09e6f1-f673-4f7a-9b09-6fa48874103c","Type":"ContainerStarted","Data":"03e1cbd9c728ef5a7940c6c98aee8c6d78892931d56af079757e866c9a89e06d"} Feb 16 02:27:20.368831 master-0 kubenswrapper[31559]: I0216 02:27:20.368785 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6twvl"] Feb 16 02:27:20.372766 master-0 kubenswrapper[31559]: I0216 02:27:20.372730 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.375579 master-0 kubenswrapper[31559]: I0216 02:27:20.375377 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 02:27:20.376059 master-0 kubenswrapper[31559]: I0216 02:27:20.376027 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-cltf9" Feb 16 02:27:20.482007 master-0 kubenswrapper[31559]: I0216 02:27:20.481936 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7bd243d6-9430-4e30-8fc5-11f550a1be22-serviceca\") pod \"node-ca-6twvl\" (UID: \"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.482007 master-0 kubenswrapper[31559]: I0216 02:27:20.482008 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmj5f\" (UniqueName: \"kubernetes.io/projected/7bd243d6-9430-4e30-8fc5-11f550a1be22-kube-api-access-qmj5f\") pod \"node-ca-6twvl\" (UID: \"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.482301 master-0 kubenswrapper[31559]: I0216 02:27:20.482092 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/7bd243d6-9430-4e30-8fc5-11f550a1be22-host\") pod \"node-ca-6twvl\" (UID: \"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.582900 master-0 kubenswrapper[31559]: I0216 02:27:20.582840 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7bd243d6-9430-4e30-8fc5-11f550a1be22-serviceca\") pod \"node-ca-6twvl\" (UID: \"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.583101 master-0 kubenswrapper[31559]: I0216 02:27:20.582919 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmj5f\" (UniqueName: \"kubernetes.io/projected/7bd243d6-9430-4e30-8fc5-11f550a1be22-kube-api-access-qmj5f\") pod \"node-ca-6twvl\" (UID: \"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.583101 master-0 kubenswrapper[31559]: I0216 02:27:20.583007 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7bd243d6-9430-4e30-8fc5-11f550a1be22-host\") pod \"node-ca-6twvl\" (UID: \"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.583101 master-0 kubenswrapper[31559]: I0216 02:27:20.583076 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7bd243d6-9430-4e30-8fc5-11f550a1be22-host\") pod \"node-ca-6twvl\" (UID: \"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.583838 master-0 kubenswrapper[31559]: I0216 02:27:20.583603 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7bd243d6-9430-4e30-8fc5-11f550a1be22-serviceca\") pod \"node-ca-6twvl\" (UID: 
\"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.599049 master-0 kubenswrapper[31559]: I0216 02:27:20.599016 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmj5f\" (UniqueName: \"kubernetes.io/projected/7bd243d6-9430-4e30-8fc5-11f550a1be22-kube-api-access-qmj5f\") pod \"node-ca-6twvl\" (UID: \"7bd243d6-9430-4e30-8fc5-11f550a1be22\") " pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.710184 master-0 kubenswrapper[31559]: I0216 02:27:20.710140 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6twvl" Feb 16 02:27:20.944654 master-0 kubenswrapper[31559]: E0216 02:27:20.944590 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:20.949895 master-0 kubenswrapper[31559]: E0216 02:27:20.949841 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:20.951327 master-0 kubenswrapper[31559]: E0216 02:27:20.951273 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 02:27:20.951387 master-0 kubenswrapper[31559]: E0216 02:27:20.951343 31559 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" podUID="98d5d80d-94e2-4110-9a60-22778825aafa" containerName="kube-multus-additional-cni-plugins" Feb 16 02:27:21.506956 master-0 kubenswrapper[31559]: W0216 02:27:21.506902 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd243d6_9430_4e30_8fc5_11f550a1be22.slice/crio-7c2d6cd489e1f6c01489505a18b8bb9d4d69897c01ae317d6aae6d321317beff WatchSource:0}: Error finding container 7c2d6cd489e1f6c01489505a18b8bb9d4d69897c01ae317d6aae6d321317beff: Status 404 returned error can't find the container with id 7c2d6cd489e1f6c01489505a18b8bb9d4d69897c01ae317d6aae6d321317beff Feb 16 02:27:22.378566 master-0 kubenswrapper[31559]: I0216 02:27:22.378510 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerStarted","Data":"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620"} Feb 16 02:27:22.378566 master-0 kubenswrapper[31559]: I0216 02:27:22.378562 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerStarted","Data":"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e"} Feb 16 02:27:22.378566 master-0 kubenswrapper[31559]: I0216 02:27:22.378573 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerStarted","Data":"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02"} Feb 16 02:27:22.378861 master-0 kubenswrapper[31559]: I0216 02:27:22.378583 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerStarted","Data":"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b"} Feb 16 02:27:22.380032 master-0 kubenswrapper[31559]: I0216 02:27:22.379994 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" event={"ID":"3d09e6f1-f673-4f7a-9b09-6fa48874103c","Type":"ContainerStarted","Data":"80c715a0da0aa8ec6b9f4c1bb0c796314438c0bb69793d7213473ad9c3dd04d1"} Feb 16 02:27:22.382642 master-0 kubenswrapper[31559]: I0216 02:27:22.382609 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6twvl" event={"ID":"7bd243d6-9430-4e30-8fc5-11f550a1be22","Type":"ContainerStarted","Data":"7c2d6cd489e1f6c01489505a18b8bb9d4d69897c01ae317d6aae6d321317beff"} Feb 16 02:27:22.401098 master-0 kubenswrapper[31559]: I0216 02:27:22.401027 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-5f9t8" podStartSLOduration=2.533765403 podStartE2EDuration="5.400952989s" podCreationTimestamp="2026-02-16 02:27:17 +0000 UTC" firstStartedPulling="2026-02-16 02:27:18.638400546 +0000 UTC m=+290.983006561" lastFinishedPulling="2026-02-16 02:27:21.505588112 +0000 UTC m=+293.850194147" observedRunningTime="2026-02-16 02:27:22.395806315 +0000 UTC m=+294.740412340" watchObservedRunningTime="2026-02-16 02:27:22.400952989 +0000 UTC m=+294.745559004" Feb 16 02:27:23.028476 master-0 kubenswrapper[31559]: I0216 02:27:23.028144 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5ddb9565b5-mcp7q"] Feb 16 02:27:23.037655 master-0 kubenswrapper[31559]: I0216 02:27:23.036198 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.045181 master-0 kubenswrapper[31559]: I0216 02:27:23.043072 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 02:27:23.045181 master-0 kubenswrapper[31559]: I0216 02:27:23.043210 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 02:27:23.045181 master-0 kubenswrapper[31559]: I0216 02:27:23.043251 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 02:27:23.045181 master-0 kubenswrapper[31559]: I0216 02:27:23.043074 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 02:27:23.045181 master-0 kubenswrapper[31559]: I0216 02:27:23.043452 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 02:27:23.045946 master-0 kubenswrapper[31559]: I0216 02:27:23.045892 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5ddb9565b5-mcp7q"] Feb 16 02:27:23.126250 master-0 kubenswrapper[31559]: I0216 02:27:23.126189 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-oauth-config\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.126250 master-0 kubenswrapper[31559]: I0216 02:27:23.126248 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kdhn\" (UniqueName: \"kubernetes.io/projected/ac79c0aa-10de-4da9-8e8b-95685d9fc609-kube-api-access-8kdhn\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " 
pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.126510 master-0 kubenswrapper[31559]: I0216 02:27:23.126276 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-config\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.126510 master-0 kubenswrapper[31559]: I0216 02:27:23.126300 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-oauth-serving-cert\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.126578 master-0 kubenswrapper[31559]: I0216 02:27:23.126505 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-serving-cert\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.126632 master-0 kubenswrapper[31559]: I0216 02:27:23.126607 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-service-ca\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.228817 master-0 kubenswrapper[31559]: I0216 02:27:23.228737 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-oauth-config\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.229029 master-0 kubenswrapper[31559]: I0216 02:27:23.228871 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kdhn\" (UniqueName: \"kubernetes.io/projected/ac79c0aa-10de-4da9-8e8b-95685d9fc609-kube-api-access-8kdhn\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.229029 master-0 kubenswrapper[31559]: I0216 02:27:23.228941 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-config\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.229139 master-0 kubenswrapper[31559]: I0216 02:27:23.229087 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-oauth-serving-cert\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.229351 master-0 kubenswrapper[31559]: I0216 02:27:23.229283 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-serving-cert\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.229579 master-0 kubenswrapper[31559]: I0216 02:27:23.229540 31559 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-service-ca\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.229966 master-0 kubenswrapper[31559]: I0216 02:27:23.229922 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-config\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.230243 master-0 kubenswrapper[31559]: I0216 02:27:23.230198 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-oauth-serving-cert\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.231075 master-0 kubenswrapper[31559]: I0216 02:27:23.231024 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-service-ca\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.234006 master-0 kubenswrapper[31559]: I0216 02:27:23.233952 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-serving-cert\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:27:23.238947 master-0 kubenswrapper[31559]: I0216 02:27:23.238907 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-oauth-config\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q"
Feb 16 02:27:23.252206 master-0 kubenswrapper[31559]: I0216 02:27:23.252170 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kdhn\" (UniqueName: \"kubernetes.io/projected/ac79c0aa-10de-4da9-8e8b-95685d9fc609-kube-api-access-8kdhn\") pod \"console-5ddb9565b5-mcp7q\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " pod="openshift-console/console-5ddb9565b5-mcp7q"
Feb 16 02:27:23.385581 master-0 kubenswrapper[31559]: I0216 02:27:23.379752 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5ddb9565b5-mcp7q"
Feb 16 02:27:23.393934 master-0 kubenswrapper[31559]: I0216 02:27:23.393814 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerStarted","Data":"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a"}
Feb 16 02:27:23.393934 master-0 kubenswrapper[31559]: I0216 02:27:23.393875 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerStarted","Data":"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83"}
Feb 16 02:27:23.431554 master-0 kubenswrapper[31559]: I0216 02:27:23.431502 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=68.260734062 podStartE2EDuration="1m11.431486336s" podCreationTimestamp="2026-02-16 02:26:12 +0000 UTC" firstStartedPulling="2026-02-16 02:27:18.341587864 +0000 UTC m=+290.686193929" lastFinishedPulling="2026-02-16 02:27:21.512340178 +0000 UTC m=+293.856946203" observedRunningTime="2026-02-16 02:27:23.426747963 +0000 UTC m=+295.771353978" watchObservedRunningTime="2026-02-16 02:27:23.431486336 +0000 UTC m=+295.776092351"
Feb 16 02:27:24.191479 master-0 kubenswrapper[31559]: I0216 02:27:24.191294 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5ddb9565b5-mcp7q"]
Feb 16 02:27:24.193650 master-0 kubenswrapper[31559]: W0216 02:27:24.193598 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac79c0aa_10de_4da9_8e8b_95685d9fc609.slice/crio-e4be6d6268e664a7ed005d090a0a44c11587bff48eb931785193a226660be17c WatchSource:0}: Error finding container e4be6d6268e664a7ed005d090a0a44c11587bff48eb931785193a226660be17c: Status 404 returned error can't find the container with id e4be6d6268e664a7ed005d090a0a44c11587bff48eb931785193a226660be17c
Feb 16 02:27:24.407253 master-0 kubenswrapper[31559]: I0216 02:27:24.407076 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5ddb9565b5-mcp7q" event={"ID":"ac79c0aa-10de-4da9-8e8b-95685d9fc609","Type":"ContainerStarted","Data":"e4be6d6268e664a7ed005d090a0a44c11587bff48eb931785193a226660be17c"}
Feb 16 02:27:24.409490 master-0 kubenswrapper[31559]: I0216 02:27:24.409399 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6twvl" event={"ID":"7bd243d6-9430-4e30-8fc5-11f550a1be22","Type":"ContainerStarted","Data":"b8b7fa488fa8c9139ba41df2867d154167acd43269dfa82ca0c1d1b27fdb8d97"}
Feb 16 02:27:24.447693 master-0 kubenswrapper[31559]: I0216 02:27:24.447564 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6twvl" podStartSLOduration=2.110170623 podStartE2EDuration="4.447536857s" podCreationTimestamp="2026-02-16 02:27:20 +0000 UTC" firstStartedPulling="2026-02-16 02:27:21.50897111 +0000 UTC m=+293.853577125" lastFinishedPulling="2026-02-16 02:27:23.846337324 +0000 UTC m=+296.190943359" observedRunningTime="2026-02-16 02:27:24.434763764 +0000 UTC m=+296.779369809" watchObservedRunningTime="2026-02-16 02:27:24.447536857 +0000 UTC m=+296.792142912"
Feb 16 02:27:25.419487 master-0 kubenswrapper[31559]: I0216 02:27:25.419412 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvk5x_98d5d80d-94e2-4110-9a60-22778825aafa/kube-multus-additional-cni-plugins/0.log"
Feb 16 02:27:25.420077 master-0 kubenswrapper[31559]: I0216 02:27:25.419519 31559 generic.go:334] "Generic (PLEG): container finished" podID="98d5d80d-94e2-4110-9a60-22778825aafa" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" exitCode=137
Feb 16 02:27:25.420077 master-0 kubenswrapper[31559]: I0216 02:27:25.419632 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" event={"ID":"98d5d80d-94e2-4110-9a60-22778825aafa","Type":"ContainerDied","Data":"4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6"}
Feb 16 02:27:25.420077 master-0 kubenswrapper[31559]: I0216 02:27:25.419708 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x" event={"ID":"98d5d80d-94e2-4110-9a60-22778825aafa","Type":"ContainerDied","Data":"df6e9461a193c2f9468a309489fadaeec2fecd9e6a63f507139c4b21ea0e609e"}
Feb 16 02:27:25.420077 master-0 kubenswrapper[31559]: I0216 02:27:25.419730 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df6e9461a193c2f9468a309489fadaeec2fecd9e6a63f507139c4b21ea0e609e"
Feb 16 02:27:25.938042 master-0 kubenswrapper[31559]: I0216 02:27:25.937990 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvk5x_98d5d80d-94e2-4110-9a60-22778825aafa/kube-multus-additional-cni-plugins/0.log"
Feb 16 02:27:25.938271 master-0 kubenswrapper[31559]: I0216 02:27:25.938063 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x"
Feb 16 02:27:25.973200 master-0 kubenswrapper[31559]: I0216 02:27:25.973086 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6khql\" (UniqueName: \"kubernetes.io/projected/98d5d80d-94e2-4110-9a60-22778825aafa-kube-api-access-6khql\") pod \"98d5d80d-94e2-4110-9a60-22778825aafa\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") "
Feb 16 02:27:25.973424 master-0 kubenswrapper[31559]: I0216 02:27:25.973222 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/98d5d80d-94e2-4110-9a60-22778825aafa-cni-sysctl-allowlist\") pod \"98d5d80d-94e2-4110-9a60-22778825aafa\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") "
Feb 16 02:27:25.973424 master-0 kubenswrapper[31559]: I0216 02:27:25.973306 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98d5d80d-94e2-4110-9a60-22778825aafa-tuning-conf-dir\") pod \"98d5d80d-94e2-4110-9a60-22778825aafa\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") "
Feb 16 02:27:25.973564 master-0 kubenswrapper[31559]: I0216 02:27:25.973504 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/98d5d80d-94e2-4110-9a60-22778825aafa-ready\") pod \"98d5d80d-94e2-4110-9a60-22778825aafa\" (UID: \"98d5d80d-94e2-4110-9a60-22778825aafa\") "
Feb 16 02:27:25.973564 master-0 kubenswrapper[31559]: I0216 02:27:25.973506 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98d5d80d-94e2-4110-9a60-22778825aafa-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "98d5d80d-94e2-4110-9a60-22778825aafa" (UID: "98d5d80d-94e2-4110-9a60-22778825aafa"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:27:25.973939 master-0 kubenswrapper[31559]: I0216 02:27:25.973906 31559 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98d5d80d-94e2-4110-9a60-22778825aafa-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:27:25.974003 master-0 kubenswrapper[31559]: I0216 02:27:25.973939 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d5d80d-94e2-4110-9a60-22778825aafa-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "98d5d80d-94e2-4110-9a60-22778825aafa" (UID: "98d5d80d-94e2-4110-9a60-22778825aafa"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:27:25.974057 master-0 kubenswrapper[31559]: I0216 02:27:25.973987 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d5d80d-94e2-4110-9a60-22778825aafa-ready" (OuterVolumeSpecName: "ready") pod "98d5d80d-94e2-4110-9a60-22778825aafa" (UID: "98d5d80d-94e2-4110-9a60-22778825aafa"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:27:25.978909 master-0 kubenswrapper[31559]: I0216 02:27:25.978821 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d5d80d-94e2-4110-9a60-22778825aafa-kube-api-access-6khql" (OuterVolumeSpecName: "kube-api-access-6khql") pod "98d5d80d-94e2-4110-9a60-22778825aafa" (UID: "98d5d80d-94e2-4110-9a60-22778825aafa"). InnerVolumeSpecName "kube-api-access-6khql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:27:26.075566 master-0 kubenswrapper[31559]: I0216 02:27:26.075509 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6khql\" (UniqueName: \"kubernetes.io/projected/98d5d80d-94e2-4110-9a60-22778825aafa-kube-api-access-6khql\") on node \"master-0\" DevicePath \"\""
Feb 16 02:27:26.075566 master-0 kubenswrapper[31559]: I0216 02:27:26.075547 31559 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/98d5d80d-94e2-4110-9a60-22778825aafa-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Feb 16 02:27:26.075566 master-0 kubenswrapper[31559]: I0216 02:27:26.075557 31559 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/98d5d80d-94e2-4110-9a60-22778825aafa-ready\") on node \"master-0\" DevicePath \"\""
Feb 16 02:27:26.429372 master-0 kubenswrapper[31559]: I0216 02:27:26.429267 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvk5x"
Feb 16 02:27:26.478750 master-0 kubenswrapper[31559]: I0216 02:27:26.478591 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvk5x"]
Feb 16 02:27:26.485380 master-0 kubenswrapper[31559]: I0216 02:27:26.485323 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvk5x"]
Feb 16 02:27:26.884555 master-0 kubenswrapper[31559]: I0216 02:27:26.884365 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 02:27:27.935114 master-0 kubenswrapper[31559]: I0216 02:27:27.935066 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d5d80d-94e2-4110-9a60-22778825aafa" path="/var/lib/kubelet/pods/98d5d80d-94e2-4110-9a60-22778825aafa/volumes"
Feb 16 02:27:28.803105 master-0 kubenswrapper[31559]: I0216 02:27:28.803046 31559 kubelet.go:1505] "Image garbage collection succeeded"
Feb 16 02:27:29.464192 master-0 kubenswrapper[31559]: I0216 02:27:29.464111 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5ddb9565b5-mcp7q" event={"ID":"ac79c0aa-10de-4da9-8e8b-95685d9fc609","Type":"ContainerStarted","Data":"7eb02b6580c83c0d72cbcba2b2249fffb4ba61714c193833e659156340a42966"}
Feb 16 02:27:29.506043 master-0 kubenswrapper[31559]: I0216 02:27:29.505900 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5ddb9565b5-mcp7q" podStartSLOduration=2.891903435 podStartE2EDuration="7.505870552s" podCreationTimestamp="2026-02-16 02:27:22 +0000 UTC" firstStartedPulling="2026-02-16 02:27:24.196835305 +0000 UTC m=+296.541441320" lastFinishedPulling="2026-02-16 02:27:28.810802422 +0000 UTC m=+301.155408437" observedRunningTime="2026-02-16 02:27:29.493175349 +0000 UTC m=+301.837781404" watchObservedRunningTime="2026-02-16 02:27:29.505870552 +0000 UTC m=+301.850476607"
Feb 16 02:27:33.382104 master-0 kubenswrapper[31559]: I0216 02:27:33.381044 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5ddb9565b5-mcp7q"
Feb 16 02:27:33.382104 master-0 kubenswrapper[31559]: I0216 02:27:33.381178 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5ddb9565b5-mcp7q"
Feb 16 02:27:33.386827 master-0 kubenswrapper[31559]: I0216 02:27:33.386751 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5ddb9565b5-mcp7q"
Feb 16 02:27:33.514056 master-0 kubenswrapper[31559]: I0216 02:27:33.513971 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-8gzlx_c8086f93-2d98-4218-afac-20a65e6bf943/multus-admission-controller/0.log"
Feb 16 02:27:33.514056 master-0 kubenswrapper[31559]: I0216 02:27:33.514037 31559 generic.go:334] "Generic (PLEG): container finished" podID="c8086f93-2d98-4218-afac-20a65e6bf943" containerID="30e7ac434b2ff8376d8f01a24e4deb497d95be6f36eeba191f63ffea76f881d2" exitCode=137
Feb 16 02:27:33.515570 master-0 kubenswrapper[31559]: I0216 02:27:33.515496 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" event={"ID":"c8086f93-2d98-4218-afac-20a65e6bf943","Type":"ContainerDied","Data":"30e7ac434b2ff8376d8f01a24e4deb497d95be6f36eeba191f63ffea76f881d2"}
Feb 16 02:27:33.520887 master-0 kubenswrapper[31559]: I0216 02:27:33.520822 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5ddb9565b5-mcp7q"
Feb 16 02:27:33.724057 master-0 kubenswrapper[31559]: I0216 02:27:33.723987 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-8gzlx_c8086f93-2d98-4218-afac-20a65e6bf943/multus-admission-controller/0.log"
Feb 16 02:27:33.724233 master-0 kubenswrapper[31559]: I0216 02:27:33.724072 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:27:33.836593 master-0 kubenswrapper[31559]: I0216 02:27:33.835892 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz49l\" (UniqueName: \"kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l\") pod \"c8086f93-2d98-4218-afac-20a65e6bf943\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") "
Feb 16 02:27:33.836593 master-0 kubenswrapper[31559]: I0216 02:27:33.836095 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs\") pod \"c8086f93-2d98-4218-afac-20a65e6bf943\" (UID: \"c8086f93-2d98-4218-afac-20a65e6bf943\") "
Feb 16 02:27:33.839780 master-0 kubenswrapper[31559]: I0216 02:27:33.839659 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l" (OuterVolumeSpecName: "kube-api-access-cz49l") pod "c8086f93-2d98-4218-afac-20a65e6bf943" (UID: "c8086f93-2d98-4218-afac-20a65e6bf943"). InnerVolumeSpecName "kube-api-access-cz49l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:27:33.841182 master-0 kubenswrapper[31559]: I0216 02:27:33.841125 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "c8086f93-2d98-4218-afac-20a65e6bf943" (UID: "c8086f93-2d98-4218-afac-20a65e6bf943"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:27:33.909613 master-0 kubenswrapper[31559]: I0216 02:27:33.909401 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6ddf646cb9-nx4kk"]
Feb 16 02:27:33.909862 master-0 kubenswrapper[31559]: E0216 02:27:33.909829 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d5d80d-94e2-4110-9a60-22778825aafa" containerName="kube-multus-additional-cni-plugins"
Feb 16 02:27:33.909862 master-0 kubenswrapper[31559]: I0216 02:27:33.909853 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d5d80d-94e2-4110-9a60-22778825aafa" containerName="kube-multus-additional-cni-plugins"
Feb 16 02:27:33.909960 master-0 kubenswrapper[31559]: E0216 02:27:33.909896 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" containerName="kube-rbac-proxy"
Feb 16 02:27:33.909960 master-0 kubenswrapper[31559]: I0216 02:27:33.909905 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" containerName="kube-rbac-proxy"
Feb 16 02:27:33.909960 master-0 kubenswrapper[31559]: E0216 02:27:33.909925 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" containerName="multus-admission-controller"
Feb 16 02:27:33.909960 master-0 kubenswrapper[31559]: I0216 02:27:33.909933 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" containerName="multus-admission-controller"
Feb 16 02:27:33.910148 master-0 kubenswrapper[31559]: I0216 02:27:33.910135 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d5d80d-94e2-4110-9a60-22778825aafa" containerName="kube-multus-additional-cni-plugins"
Feb 16 02:27:33.910205 master-0 kubenswrapper[31559]: I0216 02:27:33.910169 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" containerName="kube-rbac-proxy"
Feb 16 02:27:33.910205 master-0 kubenswrapper[31559]: I0216 02:27:33.910193 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" containerName="multus-admission-controller"
Feb 16 02:27:33.912677 master-0 kubenswrapper[31559]: I0216 02:27:33.911736 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:33.929655 master-0 kubenswrapper[31559]: I0216 02:27:33.929591 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 02:27:33.938997 master-0 kubenswrapper[31559]: I0216 02:27:33.938913 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz49l\" (UniqueName: \"kubernetes.io/projected/c8086f93-2d98-4218-afac-20a65e6bf943-kube-api-access-cz49l\") on node \"master-0\" DevicePath \"\""
Feb 16 02:27:33.938997 master-0 kubenswrapper[31559]: I0216 02:27:33.939010 31559 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8086f93-2d98-4218-afac-20a65e6bf943-webhook-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:27:33.951491 master-0 kubenswrapper[31559]: I0216 02:27:33.949801 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6ddf646cb9-nx4kk"]
Feb 16 02:27:34.046760 master-0 kubenswrapper[31559]: I0216 02:27:34.046669 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkxgb\" (UniqueName: \"kubernetes.io/projected/f6000700-cc1a-4a76-9156-a466cc6e99ef-kube-api-access-zkxgb\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.046760 master-0 kubenswrapper[31559]: I0216 02:27:34.046752 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-config\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.047107 master-0 kubenswrapper[31559]: I0216 02:27:34.046799 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-oauth-serving-cert\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.047107 master-0 kubenswrapper[31559]: I0216 02:27:34.046881 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-trusted-ca-bundle\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.047107 master-0 kubenswrapper[31559]: I0216 02:27:34.047050 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-service-ca\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.047480 master-0 kubenswrapper[31559]: I0216 02:27:34.047381 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-oauth-config\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.047708 master-0 kubenswrapper[31559]: I0216 02:27:34.047655 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-serving-cert\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.135274 master-0 kubenswrapper[31559]: I0216 02:27:34.135208 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"]
Feb 16 02:27:34.149651 master-0 kubenswrapper[31559]: I0216 02:27:34.149592 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkxgb\" (UniqueName: \"kubernetes.io/projected/f6000700-cc1a-4a76-9156-a466cc6e99ef-kube-api-access-zkxgb\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.150004 master-0 kubenswrapper[31559]: I0216 02:27:34.149971 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-config\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.150183 master-0 kubenswrapper[31559]: I0216 02:27:34.150151 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-oauth-serving-cert\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.150342 master-0 kubenswrapper[31559]: I0216 02:27:34.150317 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-trusted-ca-bundle\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.150641 master-0 kubenswrapper[31559]: I0216 02:27:34.150613 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-service-ca\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.150828 master-0 kubenswrapper[31559]: I0216 02:27:34.150797 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-oauth-config\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.151030 master-0 kubenswrapper[31559]: I0216 02:27:34.151004 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-serving-cert\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.153282 master-0 kubenswrapper[31559]: I0216 02:27:34.153245 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-service-ca\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.153536 master-0 kubenswrapper[31559]: I0216 02:27:34.153480 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-oauth-serving-cert\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.153958 master-0 kubenswrapper[31559]: I0216 02:27:34.153921 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-trusted-ca-bundle\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.154239 master-0 kubenswrapper[31559]: I0216 02:27:34.154187 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-config\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.155527 master-0 kubenswrapper[31559]: I0216 02:27:34.155421 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-serving-cert\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.160481 master-0 kubenswrapper[31559]: I0216 02:27:34.160364 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-oauth-config\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.173466 master-0 kubenswrapper[31559]: I0216 02:27:34.171309 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkxgb\" (UniqueName: \"kubernetes.io/projected/f6000700-cc1a-4a76-9156-a466cc6e99ef-kube-api-access-zkxgb\") pod \"console-6ddf646cb9-nx4kk\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") " pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.254395 master-0 kubenswrapper[31559]: I0216 02:27:34.254343 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:34.524650 master-0 kubenswrapper[31559]: I0216 02:27:34.524571 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-8gzlx_c8086f93-2d98-4218-afac-20a65e6bf943/multus-admission-controller/0.log"
Feb 16 02:27:34.525549 master-0 kubenswrapper[31559]: I0216 02:27:34.524787 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"
Feb 16 02:27:34.525549 master-0 kubenswrapper[31559]: I0216 02:27:34.525346 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-8gzlx" event={"ID":"c8086f93-2d98-4218-afac-20a65e6bf943","Type":"ContainerDied","Data":"bd5a6caddc3fbffdc59300004e09c460ee8e769674df58e5a2d88b92c736576e"}
Feb 16 02:27:34.525549 master-0 kubenswrapper[31559]: I0216 02:27:34.525454 31559 scope.go:117] "RemoveContainer" containerID="98ee9d5b95bc1d66557aa51d3718ad8ae4d6135a3674e22406a779cde9ce0095"
Feb 16 02:27:34.558747 master-0 kubenswrapper[31559]: I0216 02:27:34.558664 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"]
Feb 16 02:27:34.562132 master-0 kubenswrapper[31559]: I0216 02:27:34.562078 31559 scope.go:117] "RemoveContainer" containerID="30e7ac434b2ff8376d8f01a24e4deb497d95be6f36eeba191f63ffea76f881d2"
Feb 16 02:27:34.571144 master-0 kubenswrapper[31559]: I0216 02:27:34.571039 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-8gzlx"]
Feb 16 02:27:34.742568 master-0 kubenswrapper[31559]: I0216 02:27:34.742487 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6ddf646cb9-nx4kk"]
Feb 16 02:27:34.746372 master-0 kubenswrapper[31559]: W0216 02:27:34.746292 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6000700_cc1a_4a76_9156_a466cc6e99ef.slice/crio-b35706d38726aa11438a45ed268076d3052a4a7368e20e7d7b9ee2e5a3a08664 WatchSource:0}: Error finding container b35706d38726aa11438a45ed268076d3052a4a7368e20e7d7b9ee2e5a3a08664: Status 404 returned error can't find the container with id b35706d38726aa11438a45ed268076d3052a4a7368e20e7d7b9ee2e5a3a08664
Feb 16 02:27:35.536930 master-0 kubenswrapper[31559]: I0216 02:27:35.536706 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ddf646cb9-nx4kk" event={"ID":"f6000700-cc1a-4a76-9156-a466cc6e99ef","Type":"ContainerStarted","Data":"fb249c68f5877e1ecec16d0e14920df2d5ec238ed03f598db9b30e3e603f66c5"}
Feb 16 02:27:35.536930 master-0 kubenswrapper[31559]: I0216 02:27:35.536800 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ddf646cb9-nx4kk" event={"ID":"f6000700-cc1a-4a76-9156-a466cc6e99ef","Type":"ContainerStarted","Data":"b35706d38726aa11438a45ed268076d3052a4a7368e20e7d7b9ee2e5a3a08664"}
Feb 16 02:27:35.574493 master-0 kubenswrapper[31559]: I0216 02:27:35.573710 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6ddf646cb9-nx4kk" podStartSLOduration=2.573673906 podStartE2EDuration="2.573673906s" podCreationTimestamp="2026-02-16 02:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:27:35.570980275 +0000 UTC m=+307.915586290" watchObservedRunningTime="2026-02-16 02:27:35.573673906 +0000 UTC m=+307.918279951"
Feb 16 02:27:35.943744 master-0 kubenswrapper[31559]: I0216 02:27:35.943634 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8086f93-2d98-4218-afac-20a65e6bf943" path="/var/lib/kubelet/pods/c8086f93-2d98-4218-afac-20a65e6bf943/volumes"
Feb 16 02:27:44.255010 master-0 kubenswrapper[31559]: I0216 02:27:44.254943 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:44.255010 master-0 kubenswrapper[31559]: I0216 02:27:44.255000 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:44.259164 master-0 kubenswrapper[31559]: I0216 02:27:44.258612 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:44.631193 master-0 kubenswrapper[31559]: I0216 02:27:44.631054 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:27:44.712591 master-0 kubenswrapper[31559]: I0216 02:27:44.711425 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5ddb9565b5-mcp7q"]
Feb 16 02:27:50.190378 master-0 kubenswrapper[31559]: I0216 02:27:50.188143 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Feb 16 02:27:50.192883 master-0 kubenswrapper[31559]: I0216 02:27:50.192824 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.197565 master-0 kubenswrapper[31559]: I0216 02:27:50.197506 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.197766 master-0 kubenswrapper[31559]: I0216 02:27:50.197726 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-var-lock\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.197986 master-0 kubenswrapper[31559]: I0216 02:27:50.197947 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kube-api-access\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.200563 master-0 kubenswrapper[31559]: I0216 02:27:50.198749 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 02:27:50.200563 master-0 kubenswrapper[31559]: I0216 02:27:50.198878 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-wnpkt"
Feb 16 02:27:50.219556 master-0 kubenswrapper[31559]: I0216 02:27:50.215010 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Feb 16 02:27:50.300009 master-0 kubenswrapper[31559]: I0216 02:27:50.299917 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.300384 master-0 kubenswrapper[31559]: I0216 02:27:50.300021 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-var-lock\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.300384 master-0 kubenswrapper[31559]: I0216 02:27:50.300065 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.300384 master-0 kubenswrapper[31559]: I0216 02:27:50.300087 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kube-api-access\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.300384 master-0 kubenswrapper[31559]: I0216 02:27:50.300193 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-var-lock\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.318559 master-0 kubenswrapper[31559]: I0216 02:27:50.318494 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kube-api-access\") pod \"installer-6-master-0\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:50.544141 master-0 kubenswrapper[31559]: I0216 02:27:50.543862 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 02:27:55.643427 master-0 kubenswrapper[31559]: W0216 02:27:55.642898 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7dc8db6b_5b2f_4fd7_b0fe_3a45a5012cdb.slice/crio-9ade67ff50d2f5bdab488b585ded346b7adde45a9678d0034659fda89d38a311 WatchSource:0}: Error finding container 9ade67ff50d2f5bdab488b585ded346b7adde45a9678d0034659fda89d38a311: Status 404 returned error can't find the container with id 9ade67ff50d2f5bdab488b585ded346b7adde45a9678d0034659fda89d38a311
Feb 16 02:27:55.644777 master-0 kubenswrapper[31559]: I0216 02:27:55.642967 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Feb 16 02:27:55.723954 master-0 kubenswrapper[31559]: I0216 02:27:55.723879 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-kdt9k" event={"ID":"05a7bdc0-0106-4039-8076-ebc8b286de47","Type":"ContainerStarted","Data":"4f2fc7776243989fad76018d2716405509fbbeb56567130608252664dc062077"}
Feb 16 02:27:55.725902 master-0 kubenswrapper[31559]: I0216 02:27:55.725848 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb","Type":"ContainerStarted","Data":"9ade67ff50d2f5bdab488b585ded346b7adde45a9678d0034659fda89d38a311"}
Feb 16 02:27:55.750657 master-0 kubenswrapper[31559]: I0216 02:27:55.750557 31559 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/downloads-dcd7b7d95-kdt9k" podStartSLOduration=2.063339192 podStartE2EDuration="39.75052854s" podCreationTimestamp="2026-02-16 02:27:16 +0000 UTC" firstStartedPulling="2026-02-16 02:27:17.584042438 +0000 UTC m=+289.928648453" lastFinishedPulling="2026-02-16 02:27:55.271231746 +0000 UTC m=+327.615837801" observedRunningTime="2026-02-16 02:27:55.748147617 +0000 UTC m=+328.092753642" watchObservedRunningTime="2026-02-16 02:27:55.75052854 +0000 UTC m=+328.095134595" Feb 16 02:27:56.739328 master-0 kubenswrapper[31559]: I0216 02:27:56.739246 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb","Type":"ContainerStarted","Data":"af787ba662c256e085e431417aed5dc09012adb3789a28544bea627d8db37a48"} Feb 16 02:27:56.740309 master-0 kubenswrapper[31559]: I0216 02:27:56.740267 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-dcd7b7d95-kdt9k" Feb 16 02:27:56.742201 master-0 kubenswrapper[31559]: I0216 02:27:56.742139 31559 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-kdt9k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" start-of-body= Feb 16 02:27:56.742352 master-0 kubenswrapper[31559]: I0216 02:27:56.742217 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-kdt9k" podUID="05a7bdc0-0106-4039-8076-ebc8b286de47" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" Feb 16 02:27:56.775503 master-0 kubenswrapper[31559]: I0216 02:27:56.775327 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" 
podStartSLOduration=6.775295842 podStartE2EDuration="6.775295842s" podCreationTimestamp="2026-02-16 02:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:27:56.766796729 +0000 UTC m=+329.111402784" watchObservedRunningTime="2026-02-16 02:27:56.775295842 +0000 UTC m=+329.119901897" Feb 16 02:27:57.097946 master-0 kubenswrapper[31559]: I0216 02:27:57.097808 31559 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-kdt9k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" start-of-body= Feb 16 02:27:57.098629 master-0 kubenswrapper[31559]: I0216 02:27:57.098585 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-kdt9k" podUID="05a7bdc0-0106-4039-8076-ebc8b286de47" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" Feb 16 02:27:57.099673 master-0 kubenswrapper[31559]: I0216 02:27:57.099641 31559 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-kdt9k container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" start-of-body= Feb 16 02:27:57.099892 master-0 kubenswrapper[31559]: I0216 02:27:57.099861 31559 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-dcd7b7d95-kdt9k" podUID="05a7bdc0-0106-4039-8076-ebc8b286de47" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" Feb 16 02:27:57.749583 master-0 kubenswrapper[31559]: I0216 02:27:57.749493 31559 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-kdt9k 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" start-of-body= Feb 16 02:27:57.750908 master-0 kubenswrapper[31559]: I0216 02:27:57.750782 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-kdt9k" podUID="05a7bdc0-0106-4039-8076-ebc8b286de47" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.105:8080/\": dial tcp 10.128.0.105:8080: connect: connection refused" Feb 16 02:27:59.195575 master-0 kubenswrapper[31559]: I0216 02:27:59.195429 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" podUID="a19c5154-889e-41c6-8a31-1278fee76b9d" containerName="oauth-openshift" containerID="cri-o://1e37cadb894aba0e691914d9f5cb8b6bc3df98b9a587bf6b74164aec63113fef" gracePeriod=15 Feb 16 02:27:59.771018 master-0 kubenswrapper[31559]: I0216 02:27:59.770939 31559 generic.go:334] "Generic (PLEG): container finished" podID="a19c5154-889e-41c6-8a31-1278fee76b9d" containerID="1e37cadb894aba0e691914d9f5cb8b6bc3df98b9a587bf6b74164aec63113fef" exitCode=0 Feb 16 02:27:59.771018 master-0 kubenswrapper[31559]: I0216 02:27:59.771020 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" event={"ID":"a19c5154-889e-41c6-8a31-1278fee76b9d","Type":"ContainerDied","Data":"1e37cadb894aba0e691914d9f5cb8b6bc3df98b9a587bf6b74164aec63113fef"} Feb 16 02:28:00.456696 master-0 kubenswrapper[31559]: I0216 02:28:00.456316 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:28:00.529366 master-0 kubenswrapper[31559]: I0216 02:28:00.528971 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b97984bd5-mgf8l"] Feb 16 02:28:00.536692 master-0 kubenswrapper[31559]: E0216 02:28:00.529534 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19c5154-889e-41c6-8a31-1278fee76b9d" containerName="oauth-openshift" Feb 16 02:28:00.536692 master-0 kubenswrapper[31559]: I0216 02:28:00.529554 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19c5154-889e-41c6-8a31-1278fee76b9d" containerName="oauth-openshift" Feb 16 02:28:00.536692 master-0 kubenswrapper[31559]: I0216 02:28:00.531299 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19c5154-889e-41c6-8a31-1278fee76b9d" containerName="oauth-openshift" Feb 16 02:28:00.536692 master-0 kubenswrapper[31559]: I0216 02:28:00.532492 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.541779 master-0 kubenswrapper[31559]: I0216 02:28:00.541675 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b97984bd5-mgf8l"] Feb 16 02:28:00.575393 master-0 kubenswrapper[31559]: I0216 02:28:00.575342 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-session\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.575759 master-0 kubenswrapper[31559]: I0216 02:28:00.575723 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-service-ca\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.575937 master-0 kubenswrapper[31559]: I0216 02:28:00.575909 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-serving-cert\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.576183 master-0 kubenswrapper[31559]: I0216 02:28:00.576152 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-provider-selection\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.576374 master-0 kubenswrapper[31559]: I0216 02:28:00.576348 31559 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-policies\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.576579 master-0 kubenswrapper[31559]: I0216 02:28:00.576552 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfk6h\" (UniqueName: \"kubernetes.io/projected/a19c5154-889e-41c6-8a31-1278fee76b9d-kube-api-access-lfk6h\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.577970 master-0 kubenswrapper[31559]: I0216 02:28:00.577907 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:00.577970 master-0 kubenswrapper[31559]: I0216 02:28:00.577780 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-ocp-branding-template\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.578193 master-0 kubenswrapper[31559]: I0216 02:28:00.578048 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-error\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.578193 master-0 kubenswrapper[31559]: I0216 02:28:00.578131 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-login\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.578340 master-0 kubenswrapper[31559]: I0216 02:28:00.578204 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-cliconfig\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.578340 master-0 kubenswrapper[31559]: I0216 02:28:00.578253 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-dir\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: 
\"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.578340 master-0 kubenswrapper[31559]: I0216 02:28:00.578309 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-router-certs\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.578587 master-0 kubenswrapper[31559]: I0216 02:28:00.578362 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-trusted-ca-bundle\") pod \"a19c5154-889e-41c6-8a31-1278fee76b9d\" (UID: \"a19c5154-889e-41c6-8a31-1278fee76b9d\") " Feb 16 02:28:00.578587 master-0 kubenswrapper[31559]: I0216 02:28:00.578384 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:28:00.578727 master-0 kubenswrapper[31559]: I0216 02:28:00.578661 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.578876 master-0 kubenswrapper[31559]: I0216 02:28:00.578823 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-login\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.578876 master-0 kubenswrapper[31559]: I0216 02:28:00.578839 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:00.579180 master-0 kubenswrapper[31559]: I0216 02:28:00.578883 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:00.579180 master-0 kubenswrapper[31559]: I0216 02:28:00.578894 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579180 master-0 kubenswrapper[31559]: I0216 02:28:00.578994 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-error\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579180 master-0 kubenswrapper[31559]: I0216 02:28:00.579002 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:00.579640 master-0 kubenswrapper[31559]: I0216 02:28:00.579127 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-audit-dir\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579640 master-0 kubenswrapper[31559]: I0216 02:28:00.579346 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579640 master-0 kubenswrapper[31559]: I0216 02:28:00.579402 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdt2h\" (UniqueName: \"kubernetes.io/projected/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-kube-api-access-hdt2h\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579640 master-0 kubenswrapper[31559]: I0216 02:28:00.579580 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579640 master-0 kubenswrapper[31559]: I0216 02:28:00.579630 
31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-session\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579676 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579760 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579804 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579831 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-audit-policies\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579920 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579936 31559 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579951 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579963 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.579991 master-0 kubenswrapper[31559]: I0216 02:28:00.579977 31559 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a19c5154-889e-41c6-8a31-1278fee76b9d-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.580659 master-0 kubenswrapper[31559]: I0216 02:28:00.580343 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:00.581071 master-0 kubenswrapper[31559]: I0216 02:28:00.581003 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19c5154-889e-41c6-8a31-1278fee76b9d-kube-api-access-lfk6h" (OuterVolumeSpecName: "kube-api-access-lfk6h") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "kube-api-access-lfk6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:28:00.581549 master-0 kubenswrapper[31559]: I0216 02:28:00.581497 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:00.581867 master-0 kubenswrapper[31559]: I0216 02:28:00.581697 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:00.582586 master-0 kubenswrapper[31559]: I0216 02:28:00.582534 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:00.582889 master-0 kubenswrapper[31559]: I0216 02:28:00.582853 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:00.583011 master-0 kubenswrapper[31559]: I0216 02:28:00.582891 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:00.584093 master-0 kubenswrapper[31559]: I0216 02:28:00.584007 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a19c5154-889e-41c6-8a31-1278fee76b9d" (UID: "a19c5154-889e-41c6-8a31-1278fee76b9d"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:00.681940 master-0 kubenswrapper[31559]: I0216 02:28:00.681861 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-login\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682167 master-0 kubenswrapper[31559]: I0216 02:28:00.681953 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682167 master-0 kubenswrapper[31559]: I0216 02:28:00.682021 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-error\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682167 master-0 kubenswrapper[31559]: I0216 02:28:00.682069 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-audit-dir\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682167 master-0 kubenswrapper[31559]: I0216 02:28:00.682124 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682167 master-0 kubenswrapper[31559]: I0216 02:28:00.682161 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdt2h\" (UniqueName: \"kubernetes.io/projected/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-kube-api-access-hdt2h\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682322 master-0 kubenswrapper[31559]: I0216 02:28:00.682221 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682500 master-0 kubenswrapper[31559]: I0216 02:28:00.682384 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-audit-dir\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682866 master-0 kubenswrapper[31559]: I0216 02:28:00.682815 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-session\") pod 
\"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682938 master-0 kubenswrapper[31559]: I0216 02:28:00.682906 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.682973 master-0 kubenswrapper[31559]: I0216 02:28:00.682958 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.683210 master-0 kubenswrapper[31559]: I0216 02:28:00.683166 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.683253 master-0 kubenswrapper[31559]: I0216 02:28:00.683231 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-audit-policies\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.683304 master-0 
kubenswrapper[31559]: I0216 02:28:00.683278 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.683480 master-0 kubenswrapper[31559]: I0216 02:28:00.683430 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.683524 master-0 kubenswrapper[31559]: I0216 02:28:00.683490 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.683524 master-0 kubenswrapper[31559]: I0216 02:28:00.683515 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.683585 master-0 kubenswrapper[31559]: I0216 02:28:00.683539 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.683585 master-0 kubenswrapper[31559]: I0216 02:28:00.683561 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfk6h\" (UniqueName: 
\"kubernetes.io/projected/a19c5154-889e-41c6-8a31-1278fee76b9d-kube-api-access-lfk6h\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.683647 master-0 kubenswrapper[31559]: I0216 02:28:00.683582 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.685579 master-0 kubenswrapper[31559]: I0216 02:28:00.685532 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.685655 master-0 kubenswrapper[31559]: I0216 02:28:00.685576 31559 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a19c5154-889e-41c6-8a31-1278fee76b9d-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:00.685655 master-0 kubenswrapper[31559]: I0216 02:28:00.684959 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.685655 master-0 kubenswrapper[31559]: I0216 02:28:00.685222 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.685795 
master-0 kubenswrapper[31559]: I0216 02:28:00.684569 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-audit-policies\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.685795 master-0 kubenswrapper[31559]: I0216 02:28:00.684737 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.687287 master-0 kubenswrapper[31559]: I0216 02:28:00.687233 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-login\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.688057 master-0 kubenswrapper[31559]: I0216 02:28:00.688004 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.688736 master-0 kubenswrapper[31559]: I0216 02:28:00.688689 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-session\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.689284 master-0 kubenswrapper[31559]: I0216 02:28:00.689233 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.689767 master-0 kubenswrapper[31559]: I0216 02:28:00.689728 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.690696 master-0 kubenswrapper[31559]: I0216 02:28:00.690657 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.693954 master-0 kubenswrapper[31559]: I0216 02:28:00.693865 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-v4-0-config-user-template-error\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " 
pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.705392 master-0 kubenswrapper[31559]: I0216 02:28:00.705300 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdt2h\" (UniqueName: \"kubernetes.io/projected/cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0-kube-api-access-hdt2h\") pod \"oauth-openshift-6b97984bd5-mgf8l\" (UID: \"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0\") " pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:00.783774 master-0 kubenswrapper[31559]: I0216 02:28:00.783681 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" event={"ID":"a19c5154-889e-41c6-8a31-1278fee76b9d","Type":"ContainerDied","Data":"ac80794df23ebbe8cebc38b294eabbae9f74584205cd49d218bcbd26a54c14de"} Feb 16 02:28:00.783774 master-0 kubenswrapper[31559]: I0216 02:28:00.783798 31559 scope.go:117] "RemoveContainer" containerID="1e37cadb894aba0e691914d9f5cb8b6bc3df98b9a587bf6b74164aec63113fef" Feb 16 02:28:00.784161 master-0 kubenswrapper[31559]: I0216 02:28:00.783794 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8" Feb 16 02:28:00.851263 master-0 kubenswrapper[31559]: I0216 02:28:00.847887 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"] Feb 16 02:28:00.857053 master-0 kubenswrapper[31559]: I0216 02:28:00.856975 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-645f9fcbc6-lsqk8"] Feb 16 02:28:00.874086 master-0 kubenswrapper[31559]: I0216 02:28:00.874021 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:01.413276 master-0 kubenswrapper[31559]: I0216 02:28:01.413188 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b97984bd5-mgf8l"] Feb 16 02:28:01.796945 master-0 kubenswrapper[31559]: I0216 02:28:01.796862 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" event={"ID":"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0","Type":"ContainerStarted","Data":"87d6bc3dd7280ff97441f68417f9244b90435658af878b350e015cd00c6ad826"} Feb 16 02:28:01.796945 master-0 kubenswrapper[31559]: I0216 02:28:01.796957 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" event={"ID":"cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0","Type":"ContainerStarted","Data":"dd2264b852013de55aaf3a64d8e5d58f4fb27ec067919171b215813cb6163426"} Feb 16 02:28:01.797815 master-0 kubenswrapper[31559]: I0216 02:28:01.797416 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:01.800294 master-0 kubenswrapper[31559]: I0216 02:28:01.800243 31559 patch_prober.go:28] interesting pod/oauth-openshift-6b97984bd5-mgf8l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.110:6443/healthz\": dial tcp 10.128.0.110:6443: connect: connection refused" start-of-body= Feb 16 02:28:01.800428 master-0 kubenswrapper[31559]: I0216 02:28:01.800328 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" podUID="cf6f695a-fcc6-4e8d-9a56-b7e980fcf1e0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.110:6443/healthz\": dial tcp 10.128.0.110:6443: connect: connection refused" Feb 16 02:28:01.840969 master-0 
kubenswrapper[31559]: I0216 02:28:01.840872 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" podStartSLOduration=27.840842553999998 podStartE2EDuration="27.840842554s" podCreationTimestamp="2026-02-16 02:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:28:01.840149106 +0000 UTC m=+334.184755171" watchObservedRunningTime="2026-02-16 02:28:01.840842554 +0000 UTC m=+334.185448609" Feb 16 02:28:01.933960 master-0 kubenswrapper[31559]: I0216 02:28:01.933827 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19c5154-889e-41c6-8a31-1278fee76b9d" path="/var/lib/kubelet/pods/a19c5154-889e-41c6-8a31-1278fee76b9d/volumes" Feb 16 02:28:02.815923 master-0 kubenswrapper[31559]: I0216 02:28:02.815834 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b97984bd5-mgf8l" Feb 16 02:28:07.108878 master-0 kubenswrapper[31559]: I0216 02:28:07.108305 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-dcd7b7d95-kdt9k" Feb 16 02:28:09.773906 master-0 kubenswrapper[31559]: I0216 02:28:09.773763 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5ddb9565b5-mcp7q" podUID="ac79c0aa-10de-4da9-8e8b-95685d9fc609" containerName="console" containerID="cri-o://7eb02b6580c83c0d72cbcba2b2249fffb4ba61714c193833e659156340a42966" gracePeriod=15 Feb 16 02:28:09.912928 master-0 kubenswrapper[31559]: I0216 02:28:09.912836 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5ddb9565b5-mcp7q_ac79c0aa-10de-4da9-8e8b-95685d9fc609/console/0.log" Feb 16 02:28:09.912928 master-0 kubenswrapper[31559]: I0216 02:28:09.912931 31559 generic.go:334] "Generic (PLEG): container 
finished" podID="ac79c0aa-10de-4da9-8e8b-95685d9fc609" containerID="7eb02b6580c83c0d72cbcba2b2249fffb4ba61714c193833e659156340a42966" exitCode=2 Feb 16 02:28:09.913251 master-0 kubenswrapper[31559]: I0216 02:28:09.912976 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5ddb9565b5-mcp7q" event={"ID":"ac79c0aa-10de-4da9-8e8b-95685d9fc609","Type":"ContainerDied","Data":"7eb02b6580c83c0d72cbcba2b2249fffb4ba61714c193833e659156340a42966"} Feb 16 02:28:10.492116 master-0 kubenswrapper[31559]: I0216 02:28:10.492041 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5ddb9565b5-mcp7q_ac79c0aa-10de-4da9-8e8b-95685d9fc609/console/0.log" Feb 16 02:28:10.492349 master-0 kubenswrapper[31559]: I0216 02:28:10.492153 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:28:10.573269 master-0 kubenswrapper[31559]: I0216 02:28:10.573191 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-config\") pod \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " Feb 16 02:28:10.573269 master-0 kubenswrapper[31559]: I0216 02:28:10.573273 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-oauth-config\") pod \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " Feb 16 02:28:10.573650 master-0 kubenswrapper[31559]: I0216 02:28:10.573361 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-service-ca\") pod \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\" (UID: 
\"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " Feb 16 02:28:10.573650 master-0 kubenswrapper[31559]: I0216 02:28:10.573406 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-oauth-serving-cert\") pod \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " Feb 16 02:28:10.573650 master-0 kubenswrapper[31559]: I0216 02:28:10.573486 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-serving-cert\") pod \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " Feb 16 02:28:10.573650 master-0 kubenswrapper[31559]: I0216 02:28:10.573529 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kdhn\" (UniqueName: \"kubernetes.io/projected/ac79c0aa-10de-4da9-8e8b-95685d9fc609-kube-api-access-8kdhn\") pod \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\" (UID: \"ac79c0aa-10de-4da9-8e8b-95685d9fc609\") " Feb 16 02:28:10.574348 master-0 kubenswrapper[31559]: I0216 02:28:10.574266 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-service-ca" (OuterVolumeSpecName: "service-ca") pod "ac79c0aa-10de-4da9-8e8b-95685d9fc609" (UID: "ac79c0aa-10de-4da9-8e8b-95685d9fc609"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:10.574764 master-0 kubenswrapper[31559]: I0216 02:28:10.574675 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-config" (OuterVolumeSpecName: "console-config") pod "ac79c0aa-10de-4da9-8e8b-95685d9fc609" (UID: "ac79c0aa-10de-4da9-8e8b-95685d9fc609"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:10.574862 master-0 kubenswrapper[31559]: I0216 02:28:10.574703 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ac79c0aa-10de-4da9-8e8b-95685d9fc609" (UID: "ac79c0aa-10de-4da9-8e8b-95685d9fc609"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:10.578244 master-0 kubenswrapper[31559]: I0216 02:28:10.578190 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ac79c0aa-10de-4da9-8e8b-95685d9fc609" (UID: "ac79c0aa-10de-4da9-8e8b-95685d9fc609"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:10.578673 master-0 kubenswrapper[31559]: I0216 02:28:10.578631 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac79c0aa-10de-4da9-8e8b-95685d9fc609-kube-api-access-8kdhn" (OuterVolumeSpecName: "kube-api-access-8kdhn") pod "ac79c0aa-10de-4da9-8e8b-95685d9fc609" (UID: "ac79c0aa-10de-4da9-8e8b-95685d9fc609"). InnerVolumeSpecName "kube-api-access-8kdhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:28:10.581651 master-0 kubenswrapper[31559]: I0216 02:28:10.581607 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ac79c0aa-10de-4da9-8e8b-95685d9fc609" (UID: "ac79c0aa-10de-4da9-8e8b-95685d9fc609"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:10.675220 master-0 kubenswrapper[31559]: I0216 02:28:10.675127 31559 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:10.675220 master-0 kubenswrapper[31559]: I0216 02:28:10.675171 31559 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:10.675220 master-0 kubenswrapper[31559]: I0216 02:28:10.675186 31559 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:10.675220 master-0 kubenswrapper[31559]: I0216 02:28:10.675199 31559 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac79c0aa-10de-4da9-8e8b-95685d9fc609-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:10.675220 master-0 kubenswrapper[31559]: I0216 02:28:10.675211 31559 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac79c0aa-10de-4da9-8e8b-95685d9fc609-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:10.675220 master-0 kubenswrapper[31559]: I0216 02:28:10.675223 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kdhn\" (UniqueName: \"kubernetes.io/projected/ac79c0aa-10de-4da9-8e8b-95685d9fc609-kube-api-access-8kdhn\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:10.927637 master-0 kubenswrapper[31559]: I0216 02:28:10.927588 31559 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-5ddb9565b5-mcp7q_ac79c0aa-10de-4da9-8e8b-95685d9fc609/console/0.log" Feb 16 02:28:10.928094 master-0 kubenswrapper[31559]: I0216 02:28:10.927712 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5ddb9565b5-mcp7q" event={"ID":"ac79c0aa-10de-4da9-8e8b-95685d9fc609","Type":"ContainerDied","Data":"e4be6d6268e664a7ed005d090a0a44c11587bff48eb931785193a226660be17c"} Feb 16 02:28:10.928094 master-0 kubenswrapper[31559]: I0216 02:28:10.927838 31559 scope.go:117] "RemoveContainer" containerID="7eb02b6580c83c0d72cbcba2b2249fffb4ba61714c193833e659156340a42966" Feb 16 02:28:10.928094 master-0 kubenswrapper[31559]: I0216 02:28:10.927840 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5ddb9565b5-mcp7q" Feb 16 02:28:10.981611 master-0 kubenswrapper[31559]: I0216 02:28:10.981528 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5ddb9565b5-mcp7q"] Feb 16 02:28:10.990773 master-0 kubenswrapper[31559]: I0216 02:28:10.990721 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5ddb9565b5-mcp7q"] Feb 16 02:28:11.940699 master-0 kubenswrapper[31559]: I0216 02:28:11.940631 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac79c0aa-10de-4da9-8e8b-95685d9fc609" path="/var/lib/kubelet/pods/ac79c0aa-10de-4da9-8e8b-95685d9fc609/volumes" Feb 16 02:28:16.885203 master-0 kubenswrapper[31559]: I0216 02:28:16.884768 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:16.931828 master-0 kubenswrapper[31559]: I0216 02:28:16.931745 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:17.043567 master-0 kubenswrapper[31559]: I0216 02:28:17.043492 31559 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:23.831287 master-0 kubenswrapper[31559]: I0216 02:28:23.830974 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 02:28:23.832803 master-0 kubenswrapper[31559]: I0216 02:28:23.832079 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy-metric" containerID="cri-o://d698d4f4ed5397da53ee6958c378cbd852fbe16cca431ad7281cf26b7ddc6f25" gracePeriod=120 Feb 16 02:28:23.832803 master-0 kubenswrapper[31559]: I0216 02:28:23.832284 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="prom-label-proxy" containerID="cri-o://262f433d546e0685ed7e4ccfbe391174ee356b8b31595276fca13f3ac6e8fa87" gracePeriod=120 Feb 16 02:28:23.832803 master-0 kubenswrapper[31559]: I0216 02:28:23.832374 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy-web" containerID="cri-o://fcb86f3336f6e01e939e46616a2206b7d0d1035dfeebea23c097729bb4c05fc8" gracePeriod=120 Feb 16 02:28:23.832803 master-0 kubenswrapper[31559]: I0216 02:28:23.832469 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="config-reloader" containerID="cri-o://4ccaee1c7cc0a1374adb4f0df1c083c907e88185b22476eaad2a88d8cf10106d" gracePeriod=120 Feb 16 02:28:23.832803 master-0 kubenswrapper[31559]: I0216 02:28:23.832487 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" 
podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy" containerID="cri-o://2c60cfdfded888fe1d8a63276e7725e268f9cf7599efabdc3a94932d6b91f55f" gracePeriod=120 Feb 16 02:28:23.832803 master-0 kubenswrapper[31559]: I0216 02:28:23.831986 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="alertmanager" containerID="cri-o://99aee9a4c80979659413e7d34094c8562f17b775b31490e30d0158db7a668e2f" gracePeriod=120 Feb 16 02:28:24.090902 master-0 kubenswrapper[31559]: I0216 02:28:24.090741 31559 generic.go:334] "Generic (PLEG): container finished" podID="40960ec6-becf-40a6-ad1e-828df1c42847" containerID="262f433d546e0685ed7e4ccfbe391174ee356b8b31595276fca13f3ac6e8fa87" exitCode=0 Feb 16 02:28:24.090902 master-0 kubenswrapper[31559]: I0216 02:28:24.090784 31559 generic.go:334] "Generic (PLEG): container finished" podID="40960ec6-becf-40a6-ad1e-828df1c42847" containerID="2c60cfdfded888fe1d8a63276e7725e268f9cf7599efabdc3a94932d6b91f55f" exitCode=0 Feb 16 02:28:24.090902 master-0 kubenswrapper[31559]: I0216 02:28:24.090792 31559 generic.go:334] "Generic (PLEG): container finished" podID="40960ec6-becf-40a6-ad1e-828df1c42847" containerID="4ccaee1c7cc0a1374adb4f0df1c083c907e88185b22476eaad2a88d8cf10106d" exitCode=0 Feb 16 02:28:24.090902 master-0 kubenswrapper[31559]: I0216 02:28:24.090798 31559 generic.go:334] "Generic (PLEG): container finished" podID="40960ec6-becf-40a6-ad1e-828df1c42847" containerID="99aee9a4c80979659413e7d34094c8562f17b775b31490e30d0158db7a668e2f" exitCode=0 Feb 16 02:28:24.090902 master-0 kubenswrapper[31559]: I0216 02:28:24.090826 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerDied","Data":"262f433d546e0685ed7e4ccfbe391174ee356b8b31595276fca13f3ac6e8fa87"} Feb 16 02:28:24.090902 master-0 
kubenswrapper[31559]: I0216 02:28:24.090888 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerDied","Data":"2c60cfdfded888fe1d8a63276e7725e268f9cf7599efabdc3a94932d6b91f55f"}
Feb 16 02:28:24.090902 master-0 kubenswrapper[31559]: I0216 02:28:24.090911 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerDied","Data":"4ccaee1c7cc0a1374adb4f0df1c083c907e88185b22476eaad2a88d8cf10106d"}
Feb 16 02:28:24.091552 master-0 kubenswrapper[31559]: I0216 02:28:24.090933 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerDied","Data":"99aee9a4c80979659413e7d34094c8562f17b775b31490e30d0158db7a668e2f"}
Feb 16 02:28:25.110186 master-0 kubenswrapper[31559]: I0216 02:28:25.110068 31559 generic.go:334] "Generic (PLEG): container finished" podID="40960ec6-becf-40a6-ad1e-828df1c42847" containerID="d698d4f4ed5397da53ee6958c378cbd852fbe16cca431ad7281cf26b7ddc6f25" exitCode=0
Feb 16 02:28:25.110186 master-0 kubenswrapper[31559]: I0216 02:28:25.110150 31559 generic.go:334] "Generic (PLEG): container finished" podID="40960ec6-becf-40a6-ad1e-828df1c42847" containerID="fcb86f3336f6e01e939e46616a2206b7d0d1035dfeebea23c097729bb4c05fc8" exitCode=0
Feb 16 02:28:25.111120 master-0 kubenswrapper[31559]: I0216 02:28:25.110186 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerDied","Data":"d698d4f4ed5397da53ee6958c378cbd852fbe16cca431ad7281cf26b7ddc6f25"}
Feb 16 02:28:25.111120 master-0 kubenswrapper[31559]: I0216 02:28:25.110271 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerDied","Data":"fcb86f3336f6e01e939e46616a2206b7d0d1035dfeebea23c097729bb4c05fc8"}
Feb 16 02:28:25.445248 master-0 kubenswrapper[31559]: I0216 02:28:25.445175 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:25.545371 master-0 kubenswrapper[31559]: I0216 02:28:25.545253 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-config-volume\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.550840 master-0 kubenswrapper[31559]: I0216 02:28:25.550760 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-config-volume" (OuterVolumeSpecName: "config-volume") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:28:25.649400 master-0 kubenswrapper[31559]: I0216 02:28:25.649206 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.649400 master-0 kubenswrapper[31559]: I0216 02:28:25.649320 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-metric\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.649849 master-0 kubenswrapper[31559]: I0216 02:28:25.649543 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-trusted-ca-bundle\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.649849 master-0 kubenswrapper[31559]: I0216 02:28:25.649618 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2snbx\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-kube-api-access-2snbx\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.649849 master-0 kubenswrapper[31559]: I0216 02:28:25.649697 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-metrics-client-ca\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.649849 master-0 kubenswrapper[31559]: I0216 02:28:25.649792 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-web-config\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.650203 master-0 kubenswrapper[31559]: I0216 02:28:25.649856 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-web\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.650203 master-0 kubenswrapper[31559]: I0216 02:28:25.649973 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-config-out\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.650203 master-0 kubenswrapper[31559]: I0216 02:28:25.650029 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-main-tls\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.650203 master-0 kubenswrapper[31559]: I0216 02:28:25.650081 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-main-db\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.650203 master-0 kubenswrapper[31559]: I0216 02:28:25.650154 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-tls-assets\") pod \"40960ec6-becf-40a6-ad1e-828df1c42847\" (UID: \"40960ec6-becf-40a6-ad1e-828df1c42847\") "
Feb 16 02:28:25.650686 master-0 kubenswrapper[31559]: I0216 02:28:25.650456 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:28:25.651067 master-0 kubenswrapper[31559]: I0216 02:28:25.650983 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:28:25.651627 master-0 kubenswrapper[31559]: I0216 02:28:25.651581 31559 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.651772 master-0 kubenswrapper[31559]: I0216 02:28:25.651638 31559 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.651772 master-0 kubenswrapper[31559]: I0216 02:28:25.651660 31559 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/40960ec6-becf-40a6-ad1e-828df1c42847-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.651958 master-0 kubenswrapper[31559]: I0216 02:28:25.651764 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:28:25.660695 master-0 kubenswrapper[31559]: I0216 02:28:25.660617 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-config-out" (OuterVolumeSpecName: "config-out") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:28:25.660695 master-0 kubenswrapper[31559]: I0216 02:28:25.660666 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:28:25.660947 master-0 kubenswrapper[31559]: I0216 02:28:25.660614 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-kube-api-access-2snbx" (OuterVolumeSpecName: "kube-api-access-2snbx") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "kube-api-access-2snbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:28:25.660947 master-0 kubenswrapper[31559]: I0216 02:28:25.660695 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:28:25.660947 master-0 kubenswrapper[31559]: I0216 02:28:25.660731 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:28:25.661121 master-0 kubenswrapper[31559]: I0216 02:28:25.660924 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:28:25.662167 master-0 kubenswrapper[31559]: I0216 02:28:25.662112 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:28:25.753208 master-0 kubenswrapper[31559]: I0216 02:28:25.753141 31559 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-tls-assets\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.753208 master-0 kubenswrapper[31559]: I0216 02:28:25.753198 31559 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.753601 master-0 kubenswrapper[31559]: I0216 02:28:25.753222 31559 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.753601 master-0 kubenswrapper[31559]: I0216 02:28:25.753248 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2snbx\" (UniqueName: \"kubernetes.io/projected/40960ec6-becf-40a6-ad1e-828df1c42847-kube-api-access-2snbx\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.753601 master-0 kubenswrapper[31559]: I0216 02:28:25.753269 31559 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.753601 master-0 kubenswrapper[31559]: I0216 02:28:25.753289 31559 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-config-out\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.753601 master-0 kubenswrapper[31559]: I0216 02:28:25.753312 31559 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.753601 master-0 kubenswrapper[31559]: I0216 02:28:25.753333 31559 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/40960ec6-becf-40a6-ad1e-828df1c42847-alertmanager-main-db\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:25.754047 master-0 kubenswrapper[31559]: I0216 02:28:25.753953 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-web-config" (OuterVolumeSpecName: "web-config") pod "40960ec6-becf-40a6-ad1e-828df1c42847" (UID: "40960ec6-becf-40a6-ad1e-828df1c42847"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:28:25.854721 master-0 kubenswrapper[31559]: I0216 02:28:25.854636 31559 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/40960ec6-becf-40a6-ad1e-828df1c42847-web-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:28:26.128110 master-0 kubenswrapper[31559]: I0216 02:28:26.127986 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"40960ec6-becf-40a6-ad1e-828df1c42847","Type":"ContainerDied","Data":"30bc49d8eb52a4442f4fee04967ac9e14801e56406eb0ee531f12d2e49257060"}
Feb 16 02:28:26.128110 master-0 kubenswrapper[31559]: I0216 02:28:26.128092 31559 scope.go:117] "RemoveContainer" containerID="262f433d546e0685ed7e4ccfbe391174ee356b8b31595276fca13f3ac6e8fa87"
Feb 16 02:28:26.128110 master-0 kubenswrapper[31559]: I0216 02:28:26.128109 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.159708 master-0 kubenswrapper[31559]: I0216 02:28:26.159638 31559 scope.go:117] "RemoveContainer" containerID="d698d4f4ed5397da53ee6958c378cbd852fbe16cca431ad7281cf26b7ddc6f25"
Feb 16 02:28:26.169534 master-0 kubenswrapper[31559]: I0216 02:28:26.168786 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 16 02:28:26.181055 master-0 kubenswrapper[31559]: I0216 02:28:26.180986 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 16 02:28:26.197909 master-0 kubenswrapper[31559]: I0216 02:28:26.197854 31559 scope.go:117] "RemoveContainer" containerID="2c60cfdfded888fe1d8a63276e7725e268f9cf7599efabdc3a94932d6b91f55f"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.225692 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: E0216 02:28:26.226399 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac79c0aa-10de-4da9-8e8b-95685d9fc609" containerName="console"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.226469 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac79c0aa-10de-4da9-8e8b-95685d9fc609" containerName="console"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: E0216 02:28:26.226528 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy-web"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.226571 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy-web"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: E0216 02:28:26.226607 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="alertmanager"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.226649 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="alertmanager"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: E0216 02:28:26.226672 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="prom-label-proxy"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.226683 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="prom-label-proxy"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: E0216 02:28:26.226699 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="config-reloader"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.226742 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="config-reloader"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: E0216 02:28:26.226756 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.226766 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: E0216 02:28:26.226828 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="init-config-reloader"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.226841 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="init-config-reloader"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: E0216 02:28:26.226856 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy-metric"
Feb 16 02:28:26.226892 master-0 kubenswrapper[31559]: I0216 02:28:26.226867 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy-metric"
Feb 16 02:28:26.229027 master-0 kubenswrapper[31559]: I0216 02:28:26.227060 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac79c0aa-10de-4da9-8e8b-95685d9fc609" containerName="console"
Feb 16 02:28:26.229027 master-0 kubenswrapper[31559]: I0216 02:28:26.227099 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="config-reloader"
Feb 16 02:28:26.229027 master-0 kubenswrapper[31559]: I0216 02:28:26.227117 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="prom-label-proxy"
Feb 16 02:28:26.229027 master-0 kubenswrapper[31559]: I0216 02:28:26.227136 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy"
Feb 16 02:28:26.229027 master-0 kubenswrapper[31559]: I0216 02:28:26.227149 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="alertmanager"
Feb 16 02:28:26.229027 master-0 kubenswrapper[31559]: I0216 02:28:26.227169 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy-web"
Feb 16 02:28:26.229027 master-0 kubenswrapper[31559]: I0216 02:28:26.227185 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" containerName="kube-rbac-proxy-metric"
Feb 16 02:28:26.230110 master-0 kubenswrapper[31559]: I0216 02:28:26.230055 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.234332 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.234390 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.234461 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.234369 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.235949 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.239867 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.241065 31559 scope.go:117] "RemoveContainer" containerID="fcb86f3336f6e01e939e46616a2206b7d0d1035dfeebea23c097729bb4c05fc8"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.242242 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.245688 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 16 02:28:26.234698 master-0 kubenswrapper[31559]: I0216 02:28:26.249827 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 16 02:28:26.319859 master-0 kubenswrapper[31559]: I0216 02:28:26.319769 31559 scope.go:117] "RemoveContainer" containerID="4ccaee1c7cc0a1374adb4f0df1c083c907e88185b22476eaad2a88d8cf10106d"
Feb 16 02:28:26.339102 master-0 kubenswrapper[31559]: I0216 02:28:26.339065 31559 scope.go:117] "RemoveContainer" containerID="99aee9a4c80979659413e7d34094c8562f17b775b31490e30d0158db7a668e2f"
Feb 16 02:28:26.355740 master-0 kubenswrapper[31559]: I0216 02:28:26.355679 31559 scope.go:117] "RemoveContainer" containerID="3c67c940a1d1d9bb40cd9ec1e8f020cd76c38ef2a18a5d3e2bb8915c2c5b1a3c"
Feb 16 02:28:26.376944 master-0 kubenswrapper[31559]: I0216 02:28:26.376898 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.376944 master-0 kubenswrapper[31559]: I0216 02:28:26.376940 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5abb5a89-cacc-49d7-907e-ccd374e119d5-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377164 master-0 kubenswrapper[31559]: I0216 02:28:26.376960 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5abb5a89-cacc-49d7-907e-ccd374e119d5-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377164 master-0 kubenswrapper[31559]: I0216 02:28:26.377101 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377164 master-0 kubenswrapper[31559]: I0216 02:28:26.377127 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-web-config\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377164 master-0 kubenswrapper[31559]: I0216 02:28:26.377144 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5abb5a89-cacc-49d7-907e-ccd374e119d5-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377410 master-0 kubenswrapper[31559]: I0216 02:28:26.377169 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377410 master-0 kubenswrapper[31559]: I0216 02:28:26.377203 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5abb5a89-cacc-49d7-907e-ccd374e119d5-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377410 master-0 kubenswrapper[31559]: I0216 02:28:26.377222 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rxk\" (UniqueName: \"kubernetes.io/projected/5abb5a89-cacc-49d7-907e-ccd374e119d5-kube-api-access-d5rxk\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377410 master-0 kubenswrapper[31559]: I0216 02:28:26.377242 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-config-volume\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377410 master-0 kubenswrapper[31559]: I0216 02:28:26.377266 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.377410 master-0 kubenswrapper[31559]: I0216 02:28:26.377290 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5abb5a89-cacc-49d7-907e-ccd374e119d5-config-out\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.478775 master-0 kubenswrapper[31559]: I0216 02:28:26.478601 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.478775 master-0 kubenswrapper[31559]: I0216 02:28:26.478711 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5abb5a89-cacc-49d7-907e-ccd374e119d5-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.479031 master-0 kubenswrapper[31559]: I0216 02:28:26.478921 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5abb5a89-cacc-49d7-907e-ccd374e119d5-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.479105 master-0 kubenswrapper[31559]: I0216 02:28:26.479045 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.479173 master-0 kubenswrapper[31559]: I0216 02:28:26.479122 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-web-config\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.479242 master-0 kubenswrapper[31559]: I0216 02:28:26.479175 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5abb5a89-cacc-49d7-907e-ccd374e119d5-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.479389 master-0 kubenswrapper[31559]: I0216 02:28:26.479253 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.479389 master-0 kubenswrapper[31559]: I0216 02:28:26.479321 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5abb5a89-cacc-49d7-907e-ccd374e119d5-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.479582 master-0 kubenswrapper[31559]: I0216 02:28:26.479384 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5rxk\" (UniqueName: \"kubernetes.io/projected/5abb5a89-cacc-49d7-907e-ccd374e119d5-kube-api-access-d5rxk\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.480044 master-0 kubenswrapper[31559]: I0216 02:28:26.479974 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5abb5a89-cacc-49d7-907e-ccd374e119d5-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.480849 master-0 kubenswrapper[31559]: I0216 02:28:26.480775 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5abb5a89-cacc-49d7-907e-ccd374e119d5-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.481019 master-0 kubenswrapper[31559]: I0216 02:28:26.480985 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-config-volume\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.481107 master-0 kubenswrapper[31559]: I0216 02:28:26.481050 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.481179 master-0 kubenswrapper[31559]: I0216 02:28:26.481108 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5abb5a89-cacc-49d7-907e-ccd374e119d5-config-out\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.485549 master-0 kubenswrapper[31559]: I0216 02:28:26.485488 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.485549 master-0 kubenswrapper[31559]: I0216 02:28:26.485518 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.487713 master-0 kubenswrapper[31559]: I0216 02:28:26.487648 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5abb5a89-cacc-49d7-907e-ccd374e119d5-config-out\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.488659 master-0 kubenswrapper[31559]: I0216 02:28:26.488555 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-config-volume\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.488659 master-0 kubenswrapper[31559]: I0216 02:28:26.488564 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 02:28:26.489037 master-0 kubenswrapper[31559]: I0216 02:28:26.488967 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/5abb5a89-cacc-49d7-907e-ccd374e119d5-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:28:26.490549 master-0 kubenswrapper[31559]: I0216 02:28:26.490493 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-web-config\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:28:26.493231 master-0 kubenswrapper[31559]: I0216 02:28:26.493175 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5abb5a89-cacc-49d7-907e-ccd374e119d5-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:28:26.494306 master-0 kubenswrapper[31559]: I0216 02:28:26.494226 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5abb5a89-cacc-49d7-907e-ccd374e119d5-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:28:26.516055 master-0 kubenswrapper[31559]: I0216 02:28:26.515995 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5rxk\" (UniqueName: \"kubernetes.io/projected/5abb5a89-cacc-49d7-907e-ccd374e119d5-kube-api-access-d5rxk\") pod \"alertmanager-main-0\" (UID: \"5abb5a89-cacc-49d7-907e-ccd374e119d5\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:28:26.576705 master-0 kubenswrapper[31559]: I0216 02:28:26.576514 31559 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-7cdbd48f5b-slwlb"] Feb 16 02:28:26.578012 master-0 kubenswrapper[31559]: I0216 02:28:26.577988 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.584549 master-0 kubenswrapper[31559]: I0216 02:28:26.582599 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-oauth-config\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.584549 master-0 kubenswrapper[31559]: I0216 02:28:26.582673 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-serving-cert\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.584549 master-0 kubenswrapper[31559]: I0216 02:28:26.582701 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-service-ca\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.584549 master-0 kubenswrapper[31559]: I0216 02:28:26.582741 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-oauth-serving-cert\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.584549 master-0 
kubenswrapper[31559]: I0216 02:28:26.582765 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l94js\" (UniqueName: \"kubernetes.io/projected/8859b956-70db-4e59-abff-faf38aa377fc-kube-api-access-l94js\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.584549 master-0 kubenswrapper[31559]: I0216 02:28:26.582793 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-console-config\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.584549 master-0 kubenswrapper[31559]: I0216 02:28:26.582825 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-trusted-ca-bundle\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.587467 master-0 kubenswrapper[31559]: I0216 02:28:26.587306 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cdbd48f5b-slwlb"] Feb 16 02:28:26.607269 master-0 kubenswrapper[31559]: I0216 02:28:26.607210 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 02:28:26.684322 master-0 kubenswrapper[31559]: I0216 02:28:26.684219 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-oauth-config\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.684479 master-0 kubenswrapper[31559]: I0216 02:28:26.684352 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-serving-cert\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.685056 master-0 kubenswrapper[31559]: I0216 02:28:26.685021 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-service-ca\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.685120 master-0 kubenswrapper[31559]: I0216 02:28:26.685089 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-oauth-serving-cert\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.685192 master-0 kubenswrapper[31559]: I0216 02:28:26.685163 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l94js\" (UniqueName: \"kubernetes.io/projected/8859b956-70db-4e59-abff-faf38aa377fc-kube-api-access-l94js\") pod 
\"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.685282 master-0 kubenswrapper[31559]: I0216 02:28:26.685220 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-console-config\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.685319 master-0 kubenswrapper[31559]: I0216 02:28:26.685308 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-trusted-ca-bundle\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.686054 master-0 kubenswrapper[31559]: I0216 02:28:26.686013 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-oauth-serving-cert\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.686911 master-0 kubenswrapper[31559]: I0216 02:28:26.686875 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-service-ca\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.687681 master-0 kubenswrapper[31559]: I0216 02:28:26.687027 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-trusted-ca-bundle\") 
pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.687856 master-0 kubenswrapper[31559]: I0216 02:28:26.687770 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-console-config\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.688703 master-0 kubenswrapper[31559]: I0216 02:28:26.688672 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-serving-cert\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.689101 master-0 kubenswrapper[31559]: I0216 02:28:26.689052 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-oauth-config\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.704611 master-0 kubenswrapper[31559]: I0216 02:28:26.704542 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l94js\" (UniqueName: \"kubernetes.io/projected/8859b956-70db-4e59-abff-faf38aa377fc-kube-api-access-l94js\") pod \"console-7cdbd48f5b-slwlb\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:26.908485 master-0 kubenswrapper[31559]: I0216 02:28:26.908348 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:27.066232 master-0 kubenswrapper[31559]: I0216 02:28:27.066152 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 02:28:27.138173 master-0 kubenswrapper[31559]: I0216 02:28:27.138081 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5abb5a89-cacc-49d7-907e-ccd374e119d5","Type":"ContainerStarted","Data":"b72e190adcea952f7d4a0bcf847a9a4a99963a855d946acfc822c4792b786192"} Feb 16 02:28:27.434204 master-0 kubenswrapper[31559]: I0216 02:28:27.431691 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cdbd48f5b-slwlb"] Feb 16 02:28:27.434204 master-0 kubenswrapper[31559]: W0216 02:28:27.433893 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8859b956_70db_4e59_abff_faf38aa377fc.slice/crio-0aa2de3fd08df8b26f3eda9cf64141424e9a1212cd2d37b4121cf652b6c95d20 WatchSource:0}: Error finding container 0aa2de3fd08df8b26f3eda9cf64141424e9a1212cd2d37b4121cf652b6c95d20: Status 404 returned error can't find the container with id 0aa2de3fd08df8b26f3eda9cf64141424e9a1212cd2d37b4121cf652b6c95d20 Feb 16 02:28:27.938188 master-0 kubenswrapper[31559]: I0216 02:28:27.938120 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40960ec6-becf-40a6-ad1e-828df1c42847" path="/var/lib/kubelet/pods/40960ec6-becf-40a6-ad1e-828df1c42847/volumes" Feb 16 02:28:28.150240 master-0 kubenswrapper[31559]: I0216 02:28:28.150148 31559 generic.go:334] "Generic (PLEG): container finished" podID="5abb5a89-cacc-49d7-907e-ccd374e119d5" containerID="8a94ef05a5053084da3ad1d77633394006a0b001c32a2a4d34f19fc57b43da58" exitCode=0 Feb 16 02:28:28.151145 master-0 kubenswrapper[31559]: I0216 02:28:28.150273 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5abb5a89-cacc-49d7-907e-ccd374e119d5","Type":"ContainerDied","Data":"8a94ef05a5053084da3ad1d77633394006a0b001c32a2a4d34f19fc57b43da58"} Feb 16 02:28:28.152967 master-0 kubenswrapper[31559]: I0216 02:28:28.152893 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cdbd48f5b-slwlb" event={"ID":"8859b956-70db-4e59-abff-faf38aa377fc","Type":"ContainerStarted","Data":"85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c"} Feb 16 02:28:28.152967 master-0 kubenswrapper[31559]: I0216 02:28:28.152967 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cdbd48f5b-slwlb" event={"ID":"8859b956-70db-4e59-abff-faf38aa377fc","Type":"ContainerStarted","Data":"0aa2de3fd08df8b26f3eda9cf64141424e9a1212cd2d37b4121cf652b6c95d20"} Feb 16 02:28:28.234115 master-0 kubenswrapper[31559]: I0216 02:28:28.233999 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 02:28:28.234688 master-0 kubenswrapper[31559]: I0216 02:28:28.234622 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="prometheus" containerID="cri-o://4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" gracePeriod=600 Feb 16 02:28:28.235334 master-0 kubenswrapper[31559]: I0216 02:28:28.235155 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cdbd48f5b-slwlb" podStartSLOduration=2.235131944 podStartE2EDuration="2.235131944s" podCreationTimestamp="2026-02-16 02:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:28:28.214689318 +0000 UTC m=+360.559295373" watchObservedRunningTime="2026-02-16 02:28:28.235131944 +0000 UTC m=+360.579737969" 
Feb 16 02:28:28.235424 master-0 kubenswrapper[31559]: I0216 02:28:28.235341 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy-web" containerID="cri-o://75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" gracePeriod=600 Feb 16 02:28:28.235664 master-0 kubenswrapper[31559]: I0216 02:28:28.235503 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy-thanos" containerID="cri-o://fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" gracePeriod=600 Feb 16 02:28:28.235772 master-0 kubenswrapper[31559]: I0216 02:28:28.235721 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="config-reloader" containerID="cri-o://a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" gracePeriod=600 Feb 16 02:28:28.235937 master-0 kubenswrapper[31559]: I0216 02:28:28.235885 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="thanos-sidecar" containerID="cri-o://c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" gracePeriod=600 Feb 16 02:28:28.239084 master-0 kubenswrapper[31559]: I0216 02:28:28.235605 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy" containerID="cri-o://01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" gracePeriod=600 Feb 16 02:28:28.716160 master-0 kubenswrapper[31559]: I0216 02:28:28.716109 31559 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.732549 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.732624 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-kube-rbac-proxy\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.732715 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-tls-assets\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.733949 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-metrics-client-certs\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.734221 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: 
\"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.734943 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-thanos-prometheus-http-client-file\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.734960 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.734987 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-serving-certs-ca-bundle\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735011 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-grpc-tls\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735081 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-tls\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735208 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-rulefiles-0\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735259 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735287 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-web-config\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735322 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-kubelet-serving-ca-bundle\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735362 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-config\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735402 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj9nn\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-kube-api-access-qj9nn\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735458 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-config-out\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735489 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-db\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.735540 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-metrics-client-ca\") pod \"9b50da24-3f10-4b81-be90-912874ed2629\" (UID: \"9b50da24-3f10-4b81-be90-912874ed2629\") " Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.736154 31559 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 
02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.736207 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.736877 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.737724 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.737771 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "secret-grpc-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.738098 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.738654 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.739548 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.739736 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "secret-kube-rbac-proxy". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.739840 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.739996 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-kube-api-access-qj9nn" (OuterVolumeSpecName: "kube-api-access-qj9nn") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "kube-api-access-qj9nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.741181 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-config" (OuterVolumeSpecName: "config") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.742937 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.742592 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.745116 master-0 kubenswrapper[31559]: I0216 02:28:28.743714 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:28.748372 master-0 kubenswrapper[31559]: I0216 02:28:28.747476 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:28.756770 master-0 kubenswrapper[31559]: I0216 02:28:28.749501 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-config-out" (OuterVolumeSpecName: "config-out") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:28:28.833242 master-0 kubenswrapper[31559]: I0216 02:28:28.833171 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-web-config" (OuterVolumeSpecName: "web-config") pod "9b50da24-3f10-4b81-be90-912874ed2629" (UID: "9b50da24-3f10-4b81-be90-912874ed2629"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841422 31559 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841520 31559 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841534 31559 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841548 31559 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841566 31559 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-grpc-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 
02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841582 31559 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841595 31559 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841608 31559 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841624 31559 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841637 31559 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-web-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841650 31559 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841663 31559 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841676 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj9nn\" (UniqueName: \"kubernetes.io/projected/9b50da24-3f10-4b81-be90-912874ed2629-kube-api-access-qj9nn\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841689 31559 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-config-out\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841701 31559 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9b50da24-3f10-4b81-be90-912874ed2629-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841714 31559 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b50da24-3f10-4b81-be90-912874ed2629-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:28.841749 master-0 kubenswrapper[31559]: I0216 02:28:28.841727 31559 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9b50da24-3f10-4b81-be90-912874ed2629-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:29.166368 master-0 kubenswrapper[31559]: I0216 02:28:29.166215 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"5abb5a89-cacc-49d7-907e-ccd374e119d5","Type":"ContainerStarted","Data":"f5d7f77ca60560060ced0161e738ba8cc1231ab6e2f7c6951a779fce473dc015"} Feb 16 02:28:29.166368 master-0 kubenswrapper[31559]: I0216 02:28:29.166289 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5abb5a89-cacc-49d7-907e-ccd374e119d5","Type":"ContainerStarted","Data":"8159d246fa03630717c21323465feb379b97fbc7312b5b8a9ab583dd2c91344b"} Feb 16 02:28:29.166368 master-0 kubenswrapper[31559]: I0216 02:28:29.166310 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5abb5a89-cacc-49d7-907e-ccd374e119d5","Type":"ContainerStarted","Data":"9a2af6ef39698d46d6f4d8099f69195a220157d7962acae2bab719f4bbd1c9f2"} Feb 16 02:28:29.166368 master-0 kubenswrapper[31559]: I0216 02:28:29.166328 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5abb5a89-cacc-49d7-907e-ccd374e119d5","Type":"ContainerStarted","Data":"5aa6b5259e259c47fab80cbb3c5c24ebef826ef263cf256c4ab1d39fd30deb49"} Feb 16 02:28:29.171751 master-0 kubenswrapper[31559]: I0216 02:28:29.171672 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b50da24-3f10-4b81-be90-912874ed2629" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" exitCode=0 Feb 16 02:28:29.171751 master-0 kubenswrapper[31559]: I0216 02:28:29.171737 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b50da24-3f10-4b81-be90-912874ed2629" containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" exitCode=0 Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.171761 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b50da24-3f10-4b81-be90-912874ed2629" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" exitCode=0 Feb 16 02:28:29.172027 master-0 
kubenswrapper[31559]: I0216 02:28:29.171793 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b50da24-3f10-4b81-be90-912874ed2629" containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" exitCode=0 Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.171819 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b50da24-3f10-4b81-be90-912874ed2629" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" exitCode=0 Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.171844 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b50da24-3f10-4b81-be90-912874ed2629" containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" exitCode=0 Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.171763 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.171787 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerDied","Data":"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a"} Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.171986 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerDied","Data":"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83"} Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.172016 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerDied","Data":"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620"} Feb 16 02:28:29.172027 master-0 
kubenswrapper[31559]: I0216 02:28:29.172031 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerDied","Data":"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e"} Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.172044 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerDied","Data":"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02"} Feb 16 02:28:29.172027 master-0 kubenswrapper[31559]: I0216 02:28:29.172057 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerDied","Data":"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b"} Feb 16 02:28:29.172997 master-0 kubenswrapper[31559]: I0216 02:28:29.172069 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9b50da24-3f10-4b81-be90-912874ed2629","Type":"ContainerDied","Data":"f22f528c13db514a186ce08f072b116dc1c1208712f39c43bbeee8730ef240a7"} Feb 16 02:28:29.172997 master-0 kubenswrapper[31559]: I0216 02:28:29.172089 31559 scope.go:117] "RemoveContainer" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" Feb 16 02:28:29.198346 master-0 kubenswrapper[31559]: I0216 02:28:29.198187 31559 scope.go:117] "RemoveContainer" containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" Feb 16 02:28:29.243067 master-0 kubenswrapper[31559]: I0216 02:28:29.242787 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 02:28:29.248154 master-0 kubenswrapper[31559]: I0216 02:28:29.248075 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 02:28:29.252076 master-0 kubenswrapper[31559]: I0216 02:28:29.251795 31559 scope.go:117] "RemoveContainer" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: I0216 02:28:29.295737 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: E0216 02:28:29.296277 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: I0216 02:28:29.296303 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: E0216 02:28:29.296341 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="thanos-sidecar" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: I0216 02:28:29.296354 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="thanos-sidecar" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: E0216 02:28:29.296377 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="config-reloader" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: I0216 02:28:29.296390 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="config-reloader" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: E0216 02:28:29.296416 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy-web" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: I0216 02:28:29.296430 31559 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy-web" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: E0216 02:28:29.296498 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="prometheus" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: I0216 02:28:29.296512 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="prometheus" Feb 16 02:28:29.296487 master-0 kubenswrapper[31559]: E0216 02:28:29.296538 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="init-config-reloader" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: I0216 02:28:29.296552 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="init-config-reloader" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: E0216 02:28:29.296583 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy-thanos" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: I0216 02:28:29.296599 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy-thanos" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: I0216 02:28:29.296829 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="thanos-sidecar" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: I0216 02:28:29.296886 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: I0216 02:28:29.296910 31559 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy-web" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: I0216 02:28:29.296935 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="prometheus" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: I0216 02:28:29.296972 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="config-reloader" Feb 16 02:28:29.297683 master-0 kubenswrapper[31559]: I0216 02:28:29.296994 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b50da24-3f10-4b81-be90-912874ed2629" containerName="kube-rbac-proxy-thanos" Feb 16 02:28:29.303484 master-0 kubenswrapper[31559]: I0216 02:28:29.301786 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.313717 master-0 kubenswrapper[31559]: I0216 02:28:29.313643 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 16 02:28:29.314074 master-0 kubenswrapper[31559]: I0216 02:28:29.313882 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 16 02:28:29.314074 master-0 kubenswrapper[31559]: I0216 02:28:29.313944 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 16 02:28:29.314923 master-0 kubenswrapper[31559]: I0216 02:28:29.314491 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 02:28:29.316209 master-0 kubenswrapper[31559]: I0216 02:28:29.315814 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 16 02:28:29.316629 master-0 kubenswrapper[31559]: I0216 02:28:29.316580 31559 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-3lmddhljekc4u" Feb 16 02:28:29.316853 master-0 kubenswrapper[31559]: I0216 02:28:29.316699 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 02:28:29.316853 master-0 kubenswrapper[31559]: I0216 02:28:29.316816 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 16 02:28:29.322550 master-0 kubenswrapper[31559]: I0216 02:28:29.319351 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 02:28:29.322550 master-0 kubenswrapper[31559]: I0216 02:28:29.320208 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 02:28:29.358041 master-0 kubenswrapper[31559]: I0216 02:28:29.357967 31559 scope.go:117] "RemoveContainer" containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" Feb 16 02:28:29.369752 master-0 kubenswrapper[31559]: I0216 02:28:29.367207 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 02:28:29.369752 master-0 kubenswrapper[31559]: I0216 02:28:29.369309 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 02:28:29.373227 master-0 kubenswrapper[31559]: I0216 02:28:29.373167 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 02:28:29.402617 master-0 kubenswrapper[31559]: I0216 02:28:29.402507 31559 scope.go:117] "RemoveContainer" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" Feb 16 02:28:29.428864 master-0 kubenswrapper[31559]: I0216 02:28:29.428790 31559 scope.go:117] "RemoveContainer" 
containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" Feb 16 02:28:29.460892 master-0 kubenswrapper[31559]: I0216 02:28:29.460810 31559 scope.go:117] "RemoveContainer" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" Feb 16 02:28:29.467630 master-0 kubenswrapper[31559]: I0216 02:28:29.467566 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.467630 master-0 kubenswrapper[31559]: I0216 02:28:29.467627 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.467932 master-0 kubenswrapper[31559]: I0216 02:28:29.467658 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-config\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.467932 master-0 kubenswrapper[31559]: I0216 02:28:29.467692 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 
02:28:29.467932 master-0 kubenswrapper[31559]: I0216 02:28:29.467794 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.467932 master-0 kubenswrapper[31559]: I0216 02:28:29.467854 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.467932 master-0 kubenswrapper[31559]: I0216 02:28:29.467900 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.468608 master-0 kubenswrapper[31559]: I0216 02:28:29.468571 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.468718 master-0 kubenswrapper[31559]: I0216 02:28:29.468648 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.468824 master-0 kubenswrapper[31559]: I0216 02:28:29.468709 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ef270823-b0cf-477d-bb96-772731e2cec5-config-out\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.468946 master-0 kubenswrapper[31559]: I0216 02:28:29.468852 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.469054 master-0 kubenswrapper[31559]: I0216 02:28:29.468944 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.469429 master-0 kubenswrapper[31559]: I0216 02:28:29.469343 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-web-config\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.469651 master-0 kubenswrapper[31559]: I0216 02:28:29.469507 31559 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.469651 master-0 kubenswrapper[31559]: I0216 02:28:29.469610 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ef270823-b0cf-477d-bb96-772731e2cec5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.469870 master-0 kubenswrapper[31559]: I0216 02:28:29.469709 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gr4d\" (UniqueName: \"kubernetes.io/projected/ef270823-b0cf-477d-bb96-772731e2cec5-kube-api-access-9gr4d\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.469870 master-0 kubenswrapper[31559]: I0216 02:28:29.469822 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.470069 master-0 kubenswrapper[31559]: I0216 02:28:29.469882 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.486003 master-0 kubenswrapper[31559]: I0216 02:28:29.485918 31559 scope.go:117] "RemoveContainer" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" Feb 16 02:28:29.486463 master-0 kubenswrapper[31559]: E0216 02:28:29.486376 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": container with ID starting with fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a not found: ID does not exist" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" Feb 16 02:28:29.486623 master-0 kubenswrapper[31559]: I0216 02:28:29.486548 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a"} err="failed to get container status \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": rpc error: code = NotFound desc = could not find container \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": container with ID starting with fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a not found: ID does not exist" Feb 16 02:28:29.486881 master-0 kubenswrapper[31559]: I0216 02:28:29.486746 31559 scope.go:117] "RemoveContainer" containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" Feb 16 02:28:29.487697 master-0 kubenswrapper[31559]: E0216 02:28:29.487622 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": container with ID starting with 01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83 not found: ID does not exist" containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" Feb 
16 02:28:29.487834 master-0 kubenswrapper[31559]: I0216 02:28:29.487703 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83"} err="failed to get container status \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": rpc error: code = NotFound desc = could not find container \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": container with ID starting with 01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83 not found: ID does not exist" Feb 16 02:28:29.487834 master-0 kubenswrapper[31559]: I0216 02:28:29.487798 31559 scope.go:117] "RemoveContainer" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" Feb 16 02:28:29.488576 master-0 kubenswrapper[31559]: E0216 02:28:29.488504 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": container with ID starting with 75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620 not found: ID does not exist" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" Feb 16 02:28:29.488802 master-0 kubenswrapper[31559]: I0216 02:28:29.488576 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620"} err="failed to get container status \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": rpc error: code = NotFound desc = could not find container \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": container with ID starting with 75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620 not found: ID does not exist" Feb 16 02:28:29.488802 master-0 kubenswrapper[31559]: I0216 02:28:29.488625 31559 scope.go:117] "RemoveContainer" 
containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" Feb 16 02:28:29.489081 master-0 kubenswrapper[31559]: E0216 02:28:29.489018 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": container with ID starting with c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e not found: ID does not exist" containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" Feb 16 02:28:29.489081 master-0 kubenswrapper[31559]: I0216 02:28:29.489065 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e"} err="failed to get container status \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": rpc error: code = NotFound desc = could not find container \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": container with ID starting with c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e not found: ID does not exist" Feb 16 02:28:29.489258 master-0 kubenswrapper[31559]: I0216 02:28:29.489091 31559 scope.go:117] "RemoveContainer" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" Feb 16 02:28:29.489701 master-0 kubenswrapper[31559]: E0216 02:28:29.489640 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": container with ID starting with a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02 not found: ID does not exist" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" Feb 16 02:28:29.489798 master-0 kubenswrapper[31559]: I0216 02:28:29.489706 31559 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02"} err="failed to get container status \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": rpc error: code = NotFound desc = could not find container \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": container with ID starting with a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02 not found: ID does not exist" Feb 16 02:28:29.489798 master-0 kubenswrapper[31559]: I0216 02:28:29.489752 31559 scope.go:117] "RemoveContainer" containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" Feb 16 02:28:29.490274 master-0 kubenswrapper[31559]: E0216 02:28:29.490209 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": container with ID starting with 4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b not found: ID does not exist" containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" Feb 16 02:28:29.490274 master-0 kubenswrapper[31559]: I0216 02:28:29.490260 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b"} err="failed to get container status \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": rpc error: code = NotFound desc = could not find container \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": container with ID starting with 4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b not found: ID does not exist" Feb 16 02:28:29.490461 master-0 kubenswrapper[31559]: I0216 02:28:29.490290 31559 scope.go:117] "RemoveContainer" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" Feb 16 02:28:29.490745 master-0 kubenswrapper[31559]: E0216 
02:28:29.490679 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": container with ID starting with 557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab not found: ID does not exist" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" Feb 16 02:28:29.490824 master-0 kubenswrapper[31559]: I0216 02:28:29.490736 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab"} err="failed to get container status \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": rpc error: code = NotFound desc = could not find container \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": container with ID starting with 557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab not found: ID does not exist" Feb 16 02:28:29.490824 master-0 kubenswrapper[31559]: I0216 02:28:29.490769 31559 scope.go:117] "RemoveContainer" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" Feb 16 02:28:29.491207 master-0 kubenswrapper[31559]: I0216 02:28:29.491144 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a"} err="failed to get container status \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": rpc error: code = NotFound desc = could not find container \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": container with ID starting with fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a not found: ID does not exist" Feb 16 02:28:29.491207 master-0 kubenswrapper[31559]: I0216 02:28:29.491191 31559 scope.go:117] "RemoveContainer" 
containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" Feb 16 02:28:29.491628 master-0 kubenswrapper[31559]: I0216 02:28:29.491562 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83"} err="failed to get container status \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": rpc error: code = NotFound desc = could not find container \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": container with ID starting with 01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83 not found: ID does not exist" Feb 16 02:28:29.491628 master-0 kubenswrapper[31559]: I0216 02:28:29.491611 31559 scope.go:117] "RemoveContainer" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" Feb 16 02:28:29.492094 master-0 kubenswrapper[31559]: I0216 02:28:29.492032 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620"} err="failed to get container status \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": rpc error: code = NotFound desc = could not find container \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": container with ID starting with 75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620 not found: ID does not exist" Feb 16 02:28:29.492094 master-0 kubenswrapper[31559]: I0216 02:28:29.492078 31559 scope.go:117] "RemoveContainer" containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" Feb 16 02:28:29.492531 master-0 kubenswrapper[31559]: I0216 02:28:29.492469 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e"} err="failed to get container status 
\"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": rpc error: code = NotFound desc = could not find container \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": container with ID starting with c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e not found: ID does not exist" Feb 16 02:28:29.492531 master-0 kubenswrapper[31559]: I0216 02:28:29.492513 31559 scope.go:117] "RemoveContainer" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" Feb 16 02:28:29.493000 master-0 kubenswrapper[31559]: I0216 02:28:29.492872 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02"} err="failed to get container status \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": rpc error: code = NotFound desc = could not find container \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": container with ID starting with a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02 not found: ID does not exist" Feb 16 02:28:29.493000 master-0 kubenswrapper[31559]: I0216 02:28:29.492919 31559 scope.go:117] "RemoveContainer" containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" Feb 16 02:28:29.493379 master-0 kubenswrapper[31559]: I0216 02:28:29.493261 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b"} err="failed to get container status \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": rpc error: code = NotFound desc = could not find container \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": container with ID starting with 4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b not found: ID does not exist" Feb 16 02:28:29.493379 master-0 kubenswrapper[31559]: I0216 
02:28:29.493302 31559 scope.go:117] "RemoveContainer" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" Feb 16 02:28:29.493787 master-0 kubenswrapper[31559]: I0216 02:28:29.493712 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab"} err="failed to get container status \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": rpc error: code = NotFound desc = could not find container \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": container with ID starting with 557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab not found: ID does not exist" Feb 16 02:28:29.493787 master-0 kubenswrapper[31559]: I0216 02:28:29.493752 31559 scope.go:117] "RemoveContainer" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" Feb 16 02:28:29.494169 master-0 kubenswrapper[31559]: I0216 02:28:29.494114 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a"} err="failed to get container status \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": rpc error: code = NotFound desc = could not find container \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": container with ID starting with fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a not found: ID does not exist" Feb 16 02:28:29.494169 master-0 kubenswrapper[31559]: I0216 02:28:29.494152 31559 scope.go:117] "RemoveContainer" containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" Feb 16 02:28:29.494588 master-0 kubenswrapper[31559]: I0216 02:28:29.494531 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83"} err="failed to get 
container status \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": rpc error: code = NotFound desc = could not find container \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": container with ID starting with 01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83 not found: ID does not exist" Feb 16 02:28:29.494688 master-0 kubenswrapper[31559]: I0216 02:28:29.494590 31559 scope.go:117] "RemoveContainer" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" Feb 16 02:28:29.495086 master-0 kubenswrapper[31559]: I0216 02:28:29.495006 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620"} err="failed to get container status \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": rpc error: code = NotFound desc = could not find container \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": container with ID starting with 75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620 not found: ID does not exist" Feb 16 02:28:29.495086 master-0 kubenswrapper[31559]: I0216 02:28:29.495057 31559 scope.go:117] "RemoveContainer" containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" Feb 16 02:28:29.495476 master-0 kubenswrapper[31559]: I0216 02:28:29.495389 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e"} err="failed to get container status \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": rpc error: code = NotFound desc = could not find container \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": container with ID starting with c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e not found: ID does not exist" Feb 16 02:28:29.495476 master-0 kubenswrapper[31559]: 
I0216 02:28:29.495430 31559 scope.go:117] "RemoveContainer" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" Feb 16 02:28:29.495950 master-0 kubenswrapper[31559]: I0216 02:28:29.495884 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02"} err="failed to get container status \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": rpc error: code = NotFound desc = could not find container \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": container with ID starting with a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02 not found: ID does not exist" Feb 16 02:28:29.495950 master-0 kubenswrapper[31559]: I0216 02:28:29.495935 31559 scope.go:117] "RemoveContainer" containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" Feb 16 02:28:29.496514 master-0 kubenswrapper[31559]: I0216 02:28:29.496429 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b"} err="failed to get container status \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": rpc error: code = NotFound desc = could not find container \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": container with ID starting with 4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b not found: ID does not exist" Feb 16 02:28:29.496514 master-0 kubenswrapper[31559]: I0216 02:28:29.496500 31559 scope.go:117] "RemoveContainer" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" Feb 16 02:28:29.496965 master-0 kubenswrapper[31559]: I0216 02:28:29.496904 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab"} err="failed to 
get container status \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": rpc error: code = NotFound desc = could not find container \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": container with ID starting with 557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab not found: ID does not exist" Feb 16 02:28:29.496965 master-0 kubenswrapper[31559]: I0216 02:28:29.496951 31559 scope.go:117] "RemoveContainer" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" Feb 16 02:28:29.497400 master-0 kubenswrapper[31559]: I0216 02:28:29.497302 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a"} err="failed to get container status \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": rpc error: code = NotFound desc = could not find container \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": container with ID starting with fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a not found: ID does not exist" Feb 16 02:28:29.497400 master-0 kubenswrapper[31559]: I0216 02:28:29.497348 31559 scope.go:117] "RemoveContainer" containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" Feb 16 02:28:29.497885 master-0 kubenswrapper[31559]: I0216 02:28:29.497830 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83"} err="failed to get container status \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": rpc error: code = NotFound desc = could not find container \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": container with ID starting with 01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83 not found: ID does not exist" Feb 16 02:28:29.497885 master-0 
kubenswrapper[31559]: I0216 02:28:29.497872 31559 scope.go:117] "RemoveContainer" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" Feb 16 02:28:29.498382 master-0 kubenswrapper[31559]: I0216 02:28:29.498264 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620"} err="failed to get container status \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": rpc error: code = NotFound desc = could not find container \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": container with ID starting with 75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620 not found: ID does not exist" Feb 16 02:28:29.498382 master-0 kubenswrapper[31559]: I0216 02:28:29.498368 31559 scope.go:117] "RemoveContainer" containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" Feb 16 02:28:29.498911 master-0 kubenswrapper[31559]: I0216 02:28:29.498791 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e"} err="failed to get container status \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": rpc error: code = NotFound desc = could not find container \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": container with ID starting with c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e not found: ID does not exist" Feb 16 02:28:29.498911 master-0 kubenswrapper[31559]: I0216 02:28:29.498841 31559 scope.go:117] "RemoveContainer" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" Feb 16 02:28:29.499311 master-0 kubenswrapper[31559]: I0216 02:28:29.499244 31559 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02"} err="failed to get container status \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": rpc error: code = NotFound desc = could not find container \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": container with ID starting with a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02 not found: ID does not exist" Feb 16 02:28:29.499311 master-0 kubenswrapper[31559]: I0216 02:28:29.499293 31559 scope.go:117] "RemoveContainer" containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" Feb 16 02:28:29.499774 master-0 kubenswrapper[31559]: I0216 02:28:29.499723 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b"} err="failed to get container status \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": rpc error: code = NotFound desc = could not find container \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": container with ID starting with 4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b not found: ID does not exist" Feb 16 02:28:29.499774 master-0 kubenswrapper[31559]: I0216 02:28:29.499755 31559 scope.go:117] "RemoveContainer" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" Feb 16 02:28:29.500205 master-0 kubenswrapper[31559]: I0216 02:28:29.500149 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab"} err="failed to get container status \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": rpc error: code = NotFound desc = could not find container \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": container with ID starting with 
557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab not found: ID does not exist" Feb 16 02:28:29.500205 master-0 kubenswrapper[31559]: I0216 02:28:29.500193 31559 scope.go:117] "RemoveContainer" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" Feb 16 02:28:29.500646 master-0 kubenswrapper[31559]: I0216 02:28:29.500588 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a"} err="failed to get container status \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": rpc error: code = NotFound desc = could not find container \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": container with ID starting with fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a not found: ID does not exist" Feb 16 02:28:29.500646 master-0 kubenswrapper[31559]: I0216 02:28:29.500632 31559 scope.go:117] "RemoveContainer" containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" Feb 16 02:28:29.501246 master-0 kubenswrapper[31559]: I0216 02:28:29.501184 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83"} err="failed to get container status \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": rpc error: code = NotFound desc = could not find container \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": container with ID starting with 01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83 not found: ID does not exist" Feb 16 02:28:29.501246 master-0 kubenswrapper[31559]: I0216 02:28:29.501235 31559 scope.go:117] "RemoveContainer" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" Feb 16 02:28:29.501867 master-0 kubenswrapper[31559]: I0216 02:28:29.501798 31559 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620"} err="failed to get container status \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": rpc error: code = NotFound desc = could not find container \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": container with ID starting with 75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620 not found: ID does not exist" Feb 16 02:28:29.501867 master-0 kubenswrapper[31559]: I0216 02:28:29.501848 31559 scope.go:117] "RemoveContainer" containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" Feb 16 02:28:29.502416 master-0 kubenswrapper[31559]: I0216 02:28:29.502371 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e"} err="failed to get container status \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": rpc error: code = NotFound desc = could not find container \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": container with ID starting with c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e not found: ID does not exist" Feb 16 02:28:29.502416 master-0 kubenswrapper[31559]: I0216 02:28:29.502411 31559 scope.go:117] "RemoveContainer" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" Feb 16 02:28:29.502947 master-0 kubenswrapper[31559]: I0216 02:28:29.502878 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02"} err="failed to get container status \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": rpc error: code = NotFound desc = could not find container \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": container with ID starting 
with a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02 not found: ID does not exist" Feb 16 02:28:29.502947 master-0 kubenswrapper[31559]: I0216 02:28:29.502920 31559 scope.go:117] "RemoveContainer" containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" Feb 16 02:28:29.503556 master-0 kubenswrapper[31559]: I0216 02:28:29.503489 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b"} err="failed to get container status \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": rpc error: code = NotFound desc = could not find container \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": container with ID starting with 4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b not found: ID does not exist" Feb 16 02:28:29.503556 master-0 kubenswrapper[31559]: I0216 02:28:29.503537 31559 scope.go:117] "RemoveContainer" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" Feb 16 02:28:29.503980 master-0 kubenswrapper[31559]: I0216 02:28:29.503917 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab"} err="failed to get container status \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": rpc error: code = NotFound desc = could not find container \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": container with ID starting with 557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab not found: ID does not exist" Feb 16 02:28:29.503980 master-0 kubenswrapper[31559]: I0216 02:28:29.503958 31559 scope.go:117] "RemoveContainer" containerID="fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a" Feb 16 02:28:29.504392 master-0 kubenswrapper[31559]: I0216 02:28:29.504330 31559 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a"} err="failed to get container status \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": rpc error: code = NotFound desc = could not find container \"fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a\": container with ID starting with fef27121ceeb80a7aebc31c20224bc11cf1a9cbb5557ffff946afefa4448b15a not found: ID does not exist" Feb 16 02:28:29.504392 master-0 kubenswrapper[31559]: I0216 02:28:29.504368 31559 scope.go:117] "RemoveContainer" containerID="01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83" Feb 16 02:28:29.504787 master-0 kubenswrapper[31559]: I0216 02:28:29.504701 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83"} err="failed to get container status \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": rpc error: code = NotFound desc = could not find container \"01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83\": container with ID starting with 01b27fa108a3b5894a94255befeb44230fc5897f8f6ce1acf532932fc2b53d83 not found: ID does not exist" Feb 16 02:28:29.504787 master-0 kubenswrapper[31559]: I0216 02:28:29.504722 31559 scope.go:117] "RemoveContainer" containerID="75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620" Feb 16 02:28:29.505121 master-0 kubenswrapper[31559]: I0216 02:28:29.505058 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620"} err="failed to get container status \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": rpc error: code = NotFound desc = could not find container \"75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620\": 
container with ID starting with 75cad3478045470f2040f88da8c40c241bef15b6fd03a907d8a57534c70ec620 not found: ID does not exist" Feb 16 02:28:29.505121 master-0 kubenswrapper[31559]: I0216 02:28:29.505099 31559 scope.go:117] "RemoveContainer" containerID="c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e" Feb 16 02:28:29.505552 master-0 kubenswrapper[31559]: I0216 02:28:29.505506 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e"} err="failed to get container status \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": rpc error: code = NotFound desc = could not find container \"c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e\": container with ID starting with c35a553e2747f5aac1b45381ac10c69bcf39cea169e699e46724c0742a84361e not found: ID does not exist" Feb 16 02:28:29.505552 master-0 kubenswrapper[31559]: I0216 02:28:29.505542 31559 scope.go:117] "RemoveContainer" containerID="a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02" Feb 16 02:28:29.506182 master-0 kubenswrapper[31559]: I0216 02:28:29.506121 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02"} err="failed to get container status \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": rpc error: code = NotFound desc = could not find container \"a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02\": container with ID starting with a66be50108e13d548689d1184b0c507616dfc0f43d0669e813aa03fb0dfedc02 not found: ID does not exist" Feb 16 02:28:29.506182 master-0 kubenswrapper[31559]: I0216 02:28:29.506164 31559 scope.go:117] "RemoveContainer" containerID="4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b" Feb 16 02:28:29.506600 master-0 kubenswrapper[31559]: I0216 02:28:29.506541 
31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b"} err="failed to get container status \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": rpc error: code = NotFound desc = could not find container \"4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b\": container with ID starting with 4dd0ca3d1ef720a8082afa260fcbaf109837a0b326a0d5205fe9539b323bea7b not found: ID does not exist" Feb 16 02:28:29.506600 master-0 kubenswrapper[31559]: I0216 02:28:29.506579 31559 scope.go:117] "RemoveContainer" containerID="557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab" Feb 16 02:28:29.507156 master-0 kubenswrapper[31559]: I0216 02:28:29.506992 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab"} err="failed to get container status \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": rpc error: code = NotFound desc = could not find container \"557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab\": container with ID starting with 557e60bc17eac08530d704e0dda9c831397e8de1d926a03728413a4cf53d59ab not found: ID does not exist" Feb 16 02:28:29.571410 master-0 kubenswrapper[31559]: I0216 02:28:29.571330 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.571410 master-0 kubenswrapper[31559]: I0216 02:28:29.571396 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.571793 master-0 kubenswrapper[31559]: I0216 02:28:29.571428 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.571793 master-0 kubenswrapper[31559]: I0216 02:28:29.571486 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.572263 master-0 kubenswrapper[31559]: I0216 02:28:29.572189 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.572366 master-0 kubenswrapper[31559]: I0216 02:28:29.572326 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ef270823-b0cf-477d-bb96-772731e2cec5-config-out\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.572479 master-0 kubenswrapper[31559]: I0216 02:28:29.572428 31559 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.572570 master-0 kubenswrapper[31559]: I0216 02:28:29.572492 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573016 master-0 kubenswrapper[31559]: I0216 02:28:29.572959 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573101 master-0 kubenswrapper[31559]: I0216 02:28:29.573001 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-web-config\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573210 master-0 kubenswrapper[31559]: I0216 02:28:29.573161 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573292 master-0 kubenswrapper[31559]: I0216 02:28:29.573236 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ef270823-b0cf-477d-bb96-772731e2cec5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573369 master-0 kubenswrapper[31559]: I0216 02:28:29.573349 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gr4d\" (UniqueName: \"kubernetes.io/projected/ef270823-b0cf-477d-bb96-772731e2cec5-kube-api-access-9gr4d\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573547 master-0 kubenswrapper[31559]: I0216 02:28:29.573499 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573638 master-0 kubenswrapper[31559]: I0216 02:28:29.573549 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573638 master-0 kubenswrapper[31559]: I0216 02:28:29.573578 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573777 master-0 kubenswrapper[31559]: I0216 02:28:29.573727 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573846 master-0 kubenswrapper[31559]: I0216 02:28:29.573796 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573915 master-0 kubenswrapper[31559]: I0216 02:28:29.573849 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-config\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.573915 master-0 kubenswrapper[31559]: I0216 02:28:29.573905 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.575603 master-0 kubenswrapper[31559]: I0216 02:28:29.575558 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 
02:28:29.575603 master-0 kubenswrapper[31559]: I0216 02:28:29.575582 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.575836 master-0 kubenswrapper[31559]: I0216 02:28:29.575740 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.578500 master-0 kubenswrapper[31559]: I0216 02:28:29.578422 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ef270823-b0cf-477d-bb96-772731e2cec5-config-out\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.578939 master-0 kubenswrapper[31559]: I0216 02:28:29.578869 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-web-config\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.579846 master-0 kubenswrapper[31559]: I0216 02:28:29.579711 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.579846 
master-0 kubenswrapper[31559]: I0216 02:28:29.579768 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ef270823-b0cf-477d-bb96-772731e2cec5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.580784 master-0 kubenswrapper[31559]: I0216 02:28:29.580709 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.580894 master-0 kubenswrapper[31559]: I0216 02:28:29.580796 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.580894 master-0 kubenswrapper[31559]: I0216 02:28:29.580859 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.581882 master-0 kubenswrapper[31559]: I0216 02:28:29.581823 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.585507 master-0 kubenswrapper[31559]: I0216 02:28:29.583259 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.585507 master-0 kubenswrapper[31559]: I0216 02:28:29.583266 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.585507 master-0 kubenswrapper[31559]: I0216 02:28:29.585404 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef270823-b0cf-477d-bb96-772731e2cec5-config\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.594106 master-0 kubenswrapper[31559]: I0216 02:28:29.594024 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ef270823-b0cf-477d-bb96-772731e2cec5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.616820 master-0 kubenswrapper[31559]: I0216 02:28:29.616731 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gr4d\" (UniqueName: \"kubernetes.io/projected/ef270823-b0cf-477d-bb96-772731e2cec5-kube-api-access-9gr4d\") pod \"prometheus-k8s-0\" (UID: \"ef270823-b0cf-477d-bb96-772731e2cec5\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.696984 master-0 kubenswrapper[31559]: I0216 02:28:29.696915 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:29.939420 master-0 kubenswrapper[31559]: I0216 02:28:29.939344 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b50da24-3f10-4b81-be90-912874ed2629" path="/var/lib/kubelet/pods/9b50da24-3f10-4b81-be90-912874ed2629/volumes" Feb 16 02:28:30.193270 master-0 kubenswrapper[31559]: I0216 02:28:30.193175 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5abb5a89-cacc-49d7-907e-ccd374e119d5","Type":"ContainerStarted","Data":"c88c112b60ea1908f13e30f0cbbdb96e3ea33b71e572f2192ce55af6fedabe47"} Feb 16 02:28:30.193270 master-0 kubenswrapper[31559]: I0216 02:28:30.193258 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5abb5a89-cacc-49d7-907e-ccd374e119d5","Type":"ContainerStarted","Data":"b433c20d0d1d640816bf4d1a5311973f09efa084b8431331e39076b516f5610c"} Feb 16 02:28:30.328963 master-0 kubenswrapper[31559]: I0216 02:28:30.328887 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 02:28:30.337640 master-0 kubenswrapper[31559]: W0216 02:28:30.335999 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef270823_b0cf_477d_bb96_772731e2cec5.slice/crio-b2b8e0730565733ab228ffb73f61baf629f3ffb8e506bed95bc86d8eef74dab3 WatchSource:0}: Error finding container b2b8e0730565733ab228ffb73f61baf629f3ffb8e506bed95bc86d8eef74dab3: Status 404 returned error can't find the container with id b2b8e0730565733ab228ffb73f61baf629f3ffb8e506bed95bc86d8eef74dab3 Feb 16 02:28:30.579726 master-0 kubenswrapper[31559]: I0216 02:28:30.579606 31559 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.579575148 podStartE2EDuration="4.579575148s" podCreationTimestamp="2026-02-16 02:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:28:30.577386241 +0000 UTC m=+362.921992286" watchObservedRunningTime="2026-02-16 02:28:30.579575148 +0000 UTC m=+362.924181203" Feb 16 02:28:31.209622 master-0 kubenswrapper[31559]: I0216 02:28:31.209494 31559 generic.go:334] "Generic (PLEG): container finished" podID="ef270823-b0cf-477d-bb96-772731e2cec5" containerID="0eb61fa34b6eaef995af54a0e4b100513b1ad6fec90ec1a39e1aad9665f152de" exitCode=0 Feb 16 02:28:31.209622 master-0 kubenswrapper[31559]: I0216 02:28:31.209561 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ef270823-b0cf-477d-bb96-772731e2cec5","Type":"ContainerDied","Data":"0eb61fa34b6eaef995af54a0e4b100513b1ad6fec90ec1a39e1aad9665f152de"} Feb 16 02:28:31.210639 master-0 kubenswrapper[31559]: I0216 02:28:31.209661 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ef270823-b0cf-477d-bb96-772731e2cec5","Type":"ContainerStarted","Data":"b2b8e0730565733ab228ffb73f61baf629f3ffb8e506bed95bc86d8eef74dab3"} Feb 16 02:28:32.227530 master-0 kubenswrapper[31559]: I0216 02:28:32.226982 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ef270823-b0cf-477d-bb96-772731e2cec5","Type":"ContainerStarted","Data":"bfd5864531471ac6c8f9ec1fa6bebaf8de47788823ee2ea57277f3ef0d97168c"} Feb 16 02:28:32.227530 master-0 kubenswrapper[31559]: I0216 02:28:32.227028 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"ef270823-b0cf-477d-bb96-772731e2cec5","Type":"ContainerStarted","Data":"30400d6771d6cef350d6176eff2215e32c3219c24a27a3904141dfa5c144baa0"} Feb 16 02:28:32.227530 master-0 kubenswrapper[31559]: I0216 02:28:32.227037 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ef270823-b0cf-477d-bb96-772731e2cec5","Type":"ContainerStarted","Data":"513c8d5894e83267f8e5452d0d48a6c1201bf68549da83a512368fc25a72ed01"} Feb 16 02:28:32.227530 master-0 kubenswrapper[31559]: I0216 02:28:32.227047 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ef270823-b0cf-477d-bb96-772731e2cec5","Type":"ContainerStarted","Data":"1e20f190da66a2b77dd69c9086f7e7e213798841691e55b80d7c8ac247a98d78"} Feb 16 02:28:32.227530 master-0 kubenswrapper[31559]: I0216 02:28:32.227055 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ef270823-b0cf-477d-bb96-772731e2cec5","Type":"ContainerStarted","Data":"789ba57d30f2b652540c4ec9c4377bb4c6218f9cce544d0a784c5d758299da9a"} Feb 16 02:28:33.240565 master-0 kubenswrapper[31559]: I0216 02:28:33.240378 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"ef270823-b0cf-477d-bb96-772731e2cec5","Type":"ContainerStarted","Data":"f24d2f7a7823c7488738f9123e2ee5aa746fb270b4a571e98af8083347df6c7d"} Feb 16 02:28:33.292849 master-0 kubenswrapper[31559]: I0216 02:28:33.292727 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.29269139 podStartE2EDuration="4.29269139s" podCreationTimestamp="2026-02-16 02:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:28:33.290310307 +0000 UTC m=+365.634916372" watchObservedRunningTime="2026-02-16 
02:28:33.29269139 +0000 UTC m=+365.637297405" Feb 16 02:28:33.337394 master-0 kubenswrapper[31559]: E0216 02:28:33.337306 31559 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:50118->192.168.32.10:34313: write tcp 192.168.32.10:50118->192.168.32.10:34313: write: connection reset by peer Feb 16 02:28:34.332616 master-0 kubenswrapper[31559]: I0216 02:28:34.332504 31559 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 02:28:34.335114 master-0 kubenswrapper[31559]: I0216 02:28:34.335044 31559 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 02:28:34.335377 master-0 kubenswrapper[31559]: I0216 02:28:34.335292 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.336288 master-0 kubenswrapper[31559]: I0216 02:28:34.336219 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver" containerID="cri-o://c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac" gracePeriod=15 Feb 16 02:28:34.336430 master-0 kubenswrapper[31559]: I0216 02:28:34.336260 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611" gracePeriod=15 Feb 16 02:28:34.336430 master-0 kubenswrapper[31559]: I0216 02:28:34.336388 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef" gracePeriod=15 Feb 16 02:28:34.336664 master-0 kubenswrapper[31559]: I0216 02:28:34.336583 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-syncer" containerID="cri-o://fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c" gracePeriod=15 Feb 16 02:28:34.336731 master-0 kubenswrapper[31559]: I0216 02:28:34.336586 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce" gracePeriod=15 Feb 16 02:28:34.336963 master-0 kubenswrapper[31559]: I0216 02:28:34.336921 31559 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 02:28:34.338334 master-0 kubenswrapper[31559]: E0216 02:28:34.338298 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 02:28:34.338531 master-0 kubenswrapper[31559]: I0216 02:28:34.338505 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 02:28:34.338675 master-0 kubenswrapper[31559]: E0216 02:28:34.338653 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver" Feb 16 02:28:34.338811 master-0 kubenswrapper[31559]: I0216 02:28:34.338789 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" 
containerName="kube-apiserver" Feb 16 02:28:34.339015 master-0 kubenswrapper[31559]: E0216 02:28:34.338993 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-syncer" Feb 16 02:28:34.339147 master-0 kubenswrapper[31559]: I0216 02:28:34.339126 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-syncer" Feb 16 02:28:34.339323 master-0 kubenswrapper[31559]: E0216 02:28:34.339299 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-insecure-readyz" Feb 16 02:28:34.339644 master-0 kubenswrapper[31559]: I0216 02:28:34.339486 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-insecure-readyz" Feb 16 02:28:34.339851 master-0 kubenswrapper[31559]: E0216 02:28:34.339827 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-check-endpoints" Feb 16 02:28:34.339983 master-0 kubenswrapper[31559]: I0216 02:28:34.339963 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-check-endpoints" Feb 16 02:28:34.340142 master-0 kubenswrapper[31559]: E0216 02:28:34.340120 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="setup" Feb 16 02:28:34.340267 master-0 kubenswrapper[31559]: I0216 02:28:34.340247 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="setup" Feb 16 02:28:34.340713 master-0 kubenswrapper[31559]: I0216 02:28:34.340682 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 16 02:28:34.340883 master-0 kubenswrapper[31559]: I0216 02:28:34.340862 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-check-endpoints" Feb 16 02:28:34.341056 master-0 kubenswrapper[31559]: I0216 02:28:34.341032 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-insecure-readyz" Feb 16 02:28:34.341244 master-0 kubenswrapper[31559]: I0216 02:28:34.341217 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-syncer" Feb 16 02:28:34.341473 master-0 kubenswrapper[31559]: I0216 02:28:34.341419 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver" Feb 16 02:28:34.375242 master-0 kubenswrapper[31559]: I0216 02:28:34.375161 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.375674 master-0 kubenswrapper[31559]: I0216 02:28:34.375640 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.375908 master-0 kubenswrapper[31559]: I0216 02:28:34.375874 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.376145 master-0 kubenswrapper[31559]: I0216 02:28:34.376116 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.376328 master-0 kubenswrapper[31559]: I0216 02:28:34.376300 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.376779 master-0 kubenswrapper[31559]: I0216 02:28:34.376686 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.376994 master-0 kubenswrapper[31559]: I0216 02:28:34.376929 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.377244 
master-0 kubenswrapper[31559]: I0216 02:28:34.377163 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.455481 master-0 kubenswrapper[31559]: E0216 02:28:34.453862 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.481720 master-0 kubenswrapper[31559]: I0216 02:28:34.481658 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.481996 master-0 kubenswrapper[31559]: I0216 02:28:34.481961 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.482197 master-0 kubenswrapper[31559]: I0216 02:28:34.482168 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.482395 master-0 kubenswrapper[31559]: I0216 02:28:34.482357 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.482692 master-0 kubenswrapper[31559]: I0216 02:28:34.482656 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.482956 master-0 kubenswrapper[31559]: I0216 02:28:34.482922 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.483096 master-0 kubenswrapper[31559]: I0216 02:28:34.482945 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.483228 master-0 kubenswrapper[31559]: I0216 02:28:34.482992 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" 
(UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.483364 master-0 kubenswrapper[31559]: I0216 02:28:34.483026 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.483553 master-0 kubenswrapper[31559]: I0216 02:28:34.483206 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.483764 master-0 kubenswrapper[31559]: I0216 02:28:34.483733 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.484041 master-0 kubenswrapper[31559]: I0216 02:28:34.484009 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.484292 master-0 kubenswrapper[31559]: I0216 02:28:34.484263 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.484646 master-0 kubenswrapper[31559]: I0216 02:28:34.483258 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.484845 master-0 kubenswrapper[31559]: I0216 02:28:34.484816 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:34.485018 master-0 kubenswrapper[31559]: I0216 02:28:34.483383 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.698722 master-0 kubenswrapper[31559]: I0216 02:28:34.698594 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:28:34.755328 master-0 kubenswrapper[31559]: I0216 02:28:34.755209 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:34.787711 master-0 kubenswrapper[31559]: W0216 02:28:34.787627 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9a2b3a37af32e5d570b82bfd956f250.slice/crio-3b200aff6bfb5b7b3f3ae41a26f6ce9d7363ce8b5592ebdc26acc1da043248f6 WatchSource:0}: Error finding container 3b200aff6bfb5b7b3f3ae41a26f6ce9d7363ce8b5592ebdc26acc1da043248f6: Status 404 returned error can't find the container with id 3b200aff6bfb5b7b3f3ae41a26f6ce9d7363ce8b5592ebdc26acc1da043248f6 Feb 16 02:28:34.792867 master-0 kubenswrapper[31559]: E0216 02:28:34.792631 31559 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1894991fad207de3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a9a2b3a37af32e5d570b82bfd956f250,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:28:34.791071203 +0000 UTC m=+367.135677248,LastTimestamp:2026-02-16 02:28:34.791071203 +0000 UTC m=+367.135677248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:28:35.269515 master-0 kubenswrapper[31559]: I0216 02:28:35.269406 31559 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_faa15f80078a2bfbe2234a74ab4da87c/kube-apiserver-cert-syncer/0.log" Feb 16 02:28:35.270717 master-0 kubenswrapper[31559]: I0216 02:28:35.270653 31559 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611" exitCode=0 Feb 16 02:28:35.270717 master-0 kubenswrapper[31559]: I0216 02:28:35.270703 31559 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce" exitCode=0 Feb 16 02:28:35.270717 master-0 kubenswrapper[31559]: I0216 02:28:35.270719 31559 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef" exitCode=0 Feb 16 02:28:35.271012 master-0 kubenswrapper[31559]: I0216 02:28:35.270734 31559 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c" exitCode=2 Feb 16 02:28:35.273504 master-0 kubenswrapper[31559]: I0216 02:28:35.273409 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a9a2b3a37af32e5d570b82bfd956f250","Type":"ContainerStarted","Data":"952b358416521f11e1d86b9917b4c4ff5b0730ac77eeb86ca5f67f6a183f7d45"} Feb 16 02:28:35.273664 master-0 kubenswrapper[31559]: I0216 02:28:35.273528 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a9a2b3a37af32e5d570b82bfd956f250","Type":"ContainerStarted","Data":"3b200aff6bfb5b7b3f3ae41a26f6ce9d7363ce8b5592ebdc26acc1da043248f6"} Feb 16 02:28:35.275878 master-0 kubenswrapper[31559]: I0216 02:28:35.275815 31559 
generic.go:334] "Generic (PLEG): container finished" podID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" containerID="af787ba662c256e085e431417aed5dc09012adb3789a28544bea627d8db37a48" exitCode=0 Feb 16 02:28:35.275993 master-0 kubenswrapper[31559]: I0216 02:28:35.275887 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb","Type":"ContainerDied","Data":"af787ba662c256e085e431417aed5dc09012adb3789a28544bea627d8db37a48"} Feb 16 02:28:35.276222 master-0 kubenswrapper[31559]: I0216 02:28:35.276146 31559 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:35.276318 master-0 kubenswrapper[31559]: E0216 02:28:35.276208 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:35.277586 master-0 kubenswrapper[31559]: I0216 02:28:35.277491 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:35.280201 master-0 kubenswrapper[31559]: I0216 02:28:35.278510 31559 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.220069 master-0 kubenswrapper[31559]: E0216 02:28:36.219922 31559 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:28:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:28:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:28:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T02:28:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.221112 master-0 kubenswrapper[31559]: E0216 02:28:36.221073 31559 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.222014 master-0 kubenswrapper[31559]: E0216 02:28:36.221948 31559 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.222857 master-0 kubenswrapper[31559]: E0216 02:28:36.222793 31559 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.223900 master-0 kubenswrapper[31559]: E0216 02:28:36.223766 31559 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.223900 master-0 kubenswrapper[31559]: E0216 02:28:36.223885 31559 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 02:28:36.286615 master-0 kubenswrapper[31559]: E0216 02:28:36.286381 31559 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:28:36.794674 master-0 kubenswrapper[31559]: I0216 02:28:36.794496 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 02:28:36.796769 master-0 kubenswrapper[31559]: I0216 02:28:36.796645 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.802695 master-0 kubenswrapper[31559]: I0216 02:28:36.802624 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_faa15f80078a2bfbe2234a74ab4da87c/kube-apiserver-cert-syncer/0.log" Feb 16 02:28:36.803978 master-0 kubenswrapper[31559]: I0216 02:28:36.803922 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:36.805349 master-0 kubenswrapper[31559]: I0216 02:28:36.805264 31559 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.806301 master-0 kubenswrapper[31559]: I0216 02:28:36.806221 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.909492 master-0 kubenswrapper[31559]: I0216 02:28:36.909331 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 
16 02:28:36.909492 master-0 kubenswrapper[31559]: I0216 02:28:36.909501 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:36.916944 master-0 kubenswrapper[31559]: I0216 02:28:36.916847 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:36.918382 master-0 kubenswrapper[31559]: I0216 02:28:36.918293 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.919513 master-0 kubenswrapper[31559]: I0216 02:28:36.919405 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.920635 master-0 kubenswrapper[31559]: I0216 02:28:36.920567 31559 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:36.941006 master-0 kubenswrapper[31559]: I0216 02:28:36.940885 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") pod \"faa15f80078a2bfbe2234a74ab4da87c\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " Feb 16 02:28:36.941196 
master-0 kubenswrapper[31559]: I0216 02:28:36.941025 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "faa15f80078a2bfbe2234a74ab4da87c" (UID: "faa15f80078a2bfbe2234a74ab4da87c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:28:36.941272 master-0 kubenswrapper[31559]: I0216 02:28:36.941247 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kubelet-dir\") pod \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " Feb 16 02:28:36.941350 master-0 kubenswrapper[31559]: I0216 02:28:36.941304 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" (UID: "7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:28:36.941605 master-0 kubenswrapper[31559]: I0216 02:28:36.941533 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") pod \"faa15f80078a2bfbe2234a74ab4da87c\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " Feb 16 02:28:36.941703 master-0 kubenswrapper[31559]: I0216 02:28:36.941633 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "faa15f80078a2bfbe2234a74ab4da87c" (UID: "faa15f80078a2bfbe2234a74ab4da87c"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:28:36.941790 master-0 kubenswrapper[31559]: I0216 02:28:36.941712 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-var-lock\") pod \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " Feb 16 02:28:36.941866 master-0 kubenswrapper[31559]: I0216 02:28:36.941848 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kube-api-access\") pod \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\" (UID: \"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb\") " Feb 16 02:28:36.941946 master-0 kubenswrapper[31559]: I0216 02:28:36.941883 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-var-lock" (OuterVolumeSpecName: "var-lock") pod "7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" (UID: "7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:28:36.941946 master-0 kubenswrapper[31559]: I0216 02:28:36.941907 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") pod \"faa15f80078a2bfbe2234a74ab4da87c\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " Feb 16 02:28:36.942318 master-0 kubenswrapper[31559]: I0216 02:28:36.942201 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "faa15f80078a2bfbe2234a74ab4da87c" (UID: "faa15f80078a2bfbe2234a74ab4da87c"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:28:36.943885 master-0 kubenswrapper[31559]: I0216 02:28:36.943832 31559 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:36.943885 master-0 kubenswrapper[31559]: I0216 02:28:36.943867 31559 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:36.943885 master-0 kubenswrapper[31559]: I0216 02:28:36.943878 31559 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:36.943885 master-0 kubenswrapper[31559]: I0216 02:28:36.943888 31559 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:36.943885 master-0 kubenswrapper[31559]: I0216 02:28:36.943899 31559 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:36.945592 master-0 kubenswrapper[31559]: I0216 02:28:36.945533 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" (UID: "7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:28:37.045574 master-0 kubenswrapper[31559]: I0216 02:28:37.045365 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:37.300237 master-0 kubenswrapper[31559]: I0216 02:28:37.297322 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb","Type":"ContainerDied","Data":"9ade67ff50d2f5bdab488b585ded346b7adde45a9678d0034659fda89d38a311"} Feb 16 02:28:37.300237 master-0 kubenswrapper[31559]: I0216 02:28:37.297421 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ade67ff50d2f5bdab488b585ded346b7adde45a9678d0034659fda89d38a311" Feb 16 02:28:37.300237 master-0 kubenswrapper[31559]: I0216 02:28:37.297349 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 02:28:37.303835 master-0 kubenswrapper[31559]: I0216 02:28:37.303778 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_faa15f80078a2bfbe2234a74ab4da87c/kube-apiserver-cert-syncer/0.log" Feb 16 02:28:37.305617 master-0 kubenswrapper[31559]: I0216 02:28:37.305563 31559 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac" exitCode=0 Feb 16 02:28:37.305771 master-0 kubenswrapper[31559]: I0216 02:28:37.305717 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:37.305946 master-0 kubenswrapper[31559]: I0216 02:28:37.305738 31559 scope.go:117] "RemoveContainer" containerID="b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611" Feb 16 02:28:37.312292 master-0 kubenswrapper[31559]: I0216 02:28:37.312246 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:28:37.313246 master-0 kubenswrapper[31559]: I0216 02:28:37.313177 31559 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.314263 master-0 kubenswrapper[31559]: I0216 02:28:37.314031 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.317366 master-0 kubenswrapper[31559]: I0216 02:28:37.317310 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.334228 master-0 kubenswrapper[31559]: I0216 02:28:37.334158 31559 scope.go:117] "RemoveContainer" containerID="7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce" Feb 16 02:28:37.352343 master-0 kubenswrapper[31559]: I0216 02:28:37.352264 31559 
status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.353508 master-0 kubenswrapper[31559]: I0216 02:28:37.353389 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.354221 master-0 kubenswrapper[31559]: I0216 02:28:37.354085 31559 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.355188 master-0 kubenswrapper[31559]: I0216 02:28:37.355065 31559 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.356013 master-0 kubenswrapper[31559]: I0216 02:28:37.355944 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 
02:28:37.357059 master-0 kubenswrapper[31559]: I0216 02:28:37.356864 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.371883 master-0 kubenswrapper[31559]: I0216 02:28:37.371810 31559 scope.go:117] "RemoveContainer" containerID="9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef" Feb 16 02:28:37.401319 master-0 kubenswrapper[31559]: I0216 02:28:37.401249 31559 scope.go:117] "RemoveContainer" containerID="fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c" Feb 16 02:28:37.433679 master-0 kubenswrapper[31559]: I0216 02:28:37.433604 31559 scope.go:117] "RemoveContainer" containerID="c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac" Feb 16 02:28:37.483210 master-0 kubenswrapper[31559]: I0216 02:28:37.483107 31559 scope.go:117] "RemoveContainer" containerID="5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471" Feb 16 02:28:37.517857 master-0 kubenswrapper[31559]: I0216 02:28:37.517779 31559 scope.go:117] "RemoveContainer" containerID="b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611" Feb 16 02:28:37.519379 master-0 kubenswrapper[31559]: E0216 02:28:37.519284 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611\": container with ID starting with b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611 not found: ID does not exist" containerID="b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611" Feb 16 02:28:37.519548 master-0 kubenswrapper[31559]: I0216 02:28:37.519390 31559 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611"} err="failed to get container status \"b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611\": rpc error: code = NotFound desc = could not find container \"b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611\": container with ID starting with b0745343c4d73d707339ae202ee2843192faeca2cb8a695a44972b86e95a9611 not found: ID does not exist" Feb 16 02:28:37.519642 master-0 kubenswrapper[31559]: I0216 02:28:37.519561 31559 scope.go:117] "RemoveContainer" containerID="7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce" Feb 16 02:28:37.520378 master-0 kubenswrapper[31559]: E0216 02:28:37.520302 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce\": container with ID starting with 7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce not found: ID does not exist" containerID="7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce" Feb 16 02:28:37.520504 master-0 kubenswrapper[31559]: I0216 02:28:37.520389 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce"} err="failed to get container status \"7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce\": rpc error: code = NotFound desc = could not find container \"7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce\": container with ID starting with 7560250703a4f95f92cb4830137ba4c28861f553762a780e9ed897c205c248ce not found: ID does not exist" Feb 16 02:28:37.520504 master-0 kubenswrapper[31559]: I0216 02:28:37.520479 31559 scope.go:117] "RemoveContainer" containerID="9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef" Feb 16 02:28:37.521746 master-0 kubenswrapper[31559]: E0216 
02:28:37.521702 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef\": container with ID starting with 9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef not found: ID does not exist" containerID="9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef" Feb 16 02:28:37.522385 master-0 kubenswrapper[31559]: I0216 02:28:37.522334 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef"} err="failed to get container status \"9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef\": rpc error: code = NotFound desc = could not find container \"9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef\": container with ID starting with 9bdf2f568c4df3e9b3963f330037f1881dfe657c9df363c02007e717c60ce8ef not found: ID does not exist" Feb 16 02:28:37.522563 master-0 kubenswrapper[31559]: I0216 02:28:37.522539 31559 scope.go:117] "RemoveContainer" containerID="fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c" Feb 16 02:28:37.524523 master-0 kubenswrapper[31559]: E0216 02:28:37.524426 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c\": container with ID starting with fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c not found: ID does not exist" containerID="fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c" Feb 16 02:28:37.524646 master-0 kubenswrapper[31559]: I0216 02:28:37.524544 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c"} err="failed to get container status 
\"fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c\": rpc error: code = NotFound desc = could not find container \"fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c\": container with ID starting with fc25e646752f2eefc1069034fc9ce62011699a37685d3736d305e53a5721016c not found: ID does not exist" Feb 16 02:28:37.524646 master-0 kubenswrapper[31559]: I0216 02:28:37.524590 31559 scope.go:117] "RemoveContainer" containerID="c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac" Feb 16 02:28:37.525013 master-0 kubenswrapper[31559]: E0216 02:28:37.524980 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac\": container with ID starting with c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac not found: ID does not exist" containerID="c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac" Feb 16 02:28:37.525150 master-0 kubenswrapper[31559]: I0216 02:28:37.525119 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac"} err="failed to get container status \"c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac\": rpc error: code = NotFound desc = could not find container \"c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac\": container with ID starting with c5e856a5a1deabc8ec437ec153f6dc9513c1142be01650534dba41f3a69aa9ac not found: ID does not exist" Feb 16 02:28:37.525273 master-0 kubenswrapper[31559]: I0216 02:28:37.525254 31559 scope.go:117] "RemoveContainer" containerID="5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471" Feb 16 02:28:37.526793 master-0 kubenswrapper[31559]: E0216 02:28:37.526729 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471\": container with ID starting with 5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471 not found: ID does not exist" containerID="5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471" Feb 16 02:28:37.526886 master-0 kubenswrapper[31559]: I0216 02:28:37.526817 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471"} err="failed to get container status \"5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471\": rpc error: code = NotFound desc = could not find container \"5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471\": container with ID starting with 5ff322ca44f3862c17754d1f775bd6f38d2521a29ea7b308d3c99dba1d568471 not found: ID does not exist" Feb 16 02:28:37.929518 master-0 kubenswrapper[31559]: I0216 02:28:37.929426 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.930108 master-0 kubenswrapper[31559]: I0216 02:28:37.930033 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.930900 master-0 kubenswrapper[31559]: I0216 02:28:37.930827 31559 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:37.934386 master-0 kubenswrapper[31559]: I0216 02:28:37.934330 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa15f80078a2bfbe2234a74ab4da87c" path="/var/lib/kubelet/pods/faa15f80078a2bfbe2234a74ab4da87c/volumes" Feb 16 02:28:38.315622 master-0 kubenswrapper[31559]: I0216 02:28:38.315528 31559 generic.go:334] "Generic (PLEG): container finished" podID="8c267cc7-a51a-4b14-baee-e584254eefc5" containerID="f80750b41fcca97bf8458c1b6044d45377e09a5a0f5619c086ed38bf7a1478e0" exitCode=0 Feb 16 02:28:38.316084 master-0 kubenswrapper[31559]: I0216 02:28:38.315634 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" event={"ID":"8c267cc7-a51a-4b14-baee-e584254eefc5","Type":"ContainerDied","Data":"f80750b41fcca97bf8458c1b6044d45377e09a5a0f5619c086ed38bf7a1478e0"} Feb 16 02:28:38.596582 master-0 kubenswrapper[31559]: I0216 02:28:38.596504 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:28:38.598099 master-0 kubenswrapper[31559]: I0216 02:28:38.598000 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:38.599093 master-0 kubenswrapper[31559]: I0216 02:28:38.599016 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:38.599971 master-0 kubenswrapper[31559]: I0216 02:28:38.599906 31559 status_manager.go:851] "Failed to get status for pod" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-67b79bd656-cs2n2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:38.795267 master-0 kubenswrapper[31559]: I0216 02:28:38.795086 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9snq8\" (UniqueName: \"kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8\") pod \"8c267cc7-a51a-4b14-baee-e584254eefc5\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " Feb 16 02:28:38.795267 master-0 kubenswrapper[31559]: I0216 02:28:38.795231 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log\") pod 
\"8c267cc7-a51a-4b14-baee-e584254eefc5\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " Feb 16 02:28:38.795618 master-0 kubenswrapper[31559]: I0216 02:28:38.795527 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") pod \"8c267cc7-a51a-4b14-baee-e584254eefc5\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " Feb 16 02:28:38.795686 master-0 kubenswrapper[31559]: I0216 02:28:38.795579 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") pod \"8c267cc7-a51a-4b14-baee-e584254eefc5\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " Feb 16 02:28:38.795771 master-0 kubenswrapper[31559]: I0216 02:28:38.795737 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle\") pod \"8c267cc7-a51a-4b14-baee-e584254eefc5\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " Feb 16 02:28:38.795874 master-0 kubenswrapper[31559]: I0216 02:28:38.795832 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") pod \"8c267cc7-a51a-4b14-baee-e584254eefc5\" (UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " Feb 16 02:28:38.796001 master-0 kubenswrapper[31559]: I0216 02:28:38.795956 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") pod \"8c267cc7-a51a-4b14-baee-e584254eefc5\" 
(UID: \"8c267cc7-a51a-4b14-baee-e584254eefc5\") " Feb 16 02:28:38.797186 master-0 kubenswrapper[31559]: I0216 02:28:38.797104 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "8c267cc7-a51a-4b14-baee-e584254eefc5" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:38.797388 master-0 kubenswrapper[31559]: I0216 02:28:38.797327 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log" (OuterVolumeSpecName: "audit-log") pod "8c267cc7-a51a-4b14-baee-e584254eefc5" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:28:38.797735 master-0 kubenswrapper[31559]: I0216 02:28:38.797674 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "8c267cc7-a51a-4b14-baee-e584254eefc5" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:28:38.801130 master-0 kubenswrapper[31559]: I0216 02:28:38.801056 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8" (OuterVolumeSpecName: "kube-api-access-9snq8") pod "8c267cc7-a51a-4b14-baee-e584254eefc5" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5"). InnerVolumeSpecName "kube-api-access-9snq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:28:38.801904 master-0 kubenswrapper[31559]: I0216 02:28:38.801843 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "8c267cc7-a51a-4b14-baee-e584254eefc5" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:38.802414 master-0 kubenswrapper[31559]: I0216 02:28:38.802349 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "8c267cc7-a51a-4b14-baee-e584254eefc5" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:38.803014 master-0 kubenswrapper[31559]: I0216 02:28:38.802953 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "8c267cc7-a51a-4b14-baee-e584254eefc5" (UID: "8c267cc7-a51a-4b14-baee-e584254eefc5"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:28:38.898100 master-0 kubenswrapper[31559]: I0216 02:28:38.897987 31559 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:38.898100 master-0 kubenswrapper[31559]: I0216 02:28:38.898056 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9snq8\" (UniqueName: \"kubernetes.io/projected/8c267cc7-a51a-4b14-baee-e584254eefc5-kube-api-access-9snq8\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:38.898100 master-0 kubenswrapper[31559]: I0216 02:28:38.898085 31559 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c267cc7-a51a-4b14-baee-e584254eefc5-audit-log\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:38.898100 master-0 kubenswrapper[31559]: I0216 02:28:38.898104 31559 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:38.898648 master-0 kubenswrapper[31559]: I0216 02:28:38.898127 31559 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c267cc7-a51a-4b14-baee-e584254eefc5-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:38.898648 master-0 kubenswrapper[31559]: I0216 02:28:38.898147 31559 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:38.898648 master-0 kubenswrapper[31559]: I0216 02:28:38.898165 31559 reconciler_common.go:293] "Volume detached 
for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c267cc7-a51a-4b14-baee-e584254eefc5-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 02:28:39.327689 master-0 kubenswrapper[31559]: I0216 02:28:39.327550 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" event={"ID":"8c267cc7-a51a-4b14-baee-e584254eefc5","Type":"ContainerDied","Data":"4f55a0409391e0031662fe90965f9c6570290d87940cb9577014c63ddf57bd34"} Feb 16 02:28:39.327689 master-0 kubenswrapper[31559]: I0216 02:28:39.327644 31559 scope.go:117] "RemoveContainer" containerID="f80750b41fcca97bf8458c1b6044d45377e09a5a0f5619c086ed38bf7a1478e0" Feb 16 02:28:39.328175 master-0 kubenswrapper[31559]: I0216 02:28:39.327670 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" Feb 16 02:28:39.329304 master-0 kubenswrapper[31559]: I0216 02:28:39.329174 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:39.330401 master-0 kubenswrapper[31559]: I0216 02:28:39.330344 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:39.331290 master-0 kubenswrapper[31559]: I0216 02:28:39.331237 31559 status_manager.go:851] "Failed to get status for pod" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-67b79bd656-cs2n2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:39.360792 master-0 kubenswrapper[31559]: I0216 02:28:39.360689 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:39.361659 master-0 kubenswrapper[31559]: I0216 02:28:39.361599 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:39.362314 master-0 kubenswrapper[31559]: I0216 02:28:39.362258 31559 status_manager.go:851] "Failed to get status for pod" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-67b79bd656-cs2n2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:40.706078 master-0 kubenswrapper[31559]: E0216 02:28:40.705940 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:40.717950 master-0 kubenswrapper[31559]: E0216 02:28:40.717077 31559 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:40.721515 master-0 kubenswrapper[31559]: E0216 02:28:40.721449 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:40.725872 master-0 kubenswrapper[31559]: E0216 02:28:40.725195 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:40.727200 master-0 kubenswrapper[31559]: E0216 02:28:40.727121 31559 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:40.727200 master-0 kubenswrapper[31559]: I0216 02:28:40.727177 31559 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 02:28:40.728585 master-0 kubenswrapper[31559]: E0216 02:28:40.728212 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 16 02:28:40.930244 master-0 kubenswrapper[31559]: E0216 02:28:40.930170 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial 
tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 16 02:28:41.332166 master-0 kubenswrapper[31559]: E0216 02:28:41.332021 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 16 02:28:42.134041 master-0 kubenswrapper[31559]: E0216 02:28:42.133858 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 02:28:42.971071 master-0 kubenswrapper[31559]: E0216 02:28:42.970823 31559 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1894991fad207de3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a9a2b3a37af32e5d570b82bfd956f250,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 02:28:34.791071203 +0000 UTC m=+367.135677248,LastTimestamp:2026-02-16 02:28:34.791071203 +0000 UTC m=+367.135677248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 02:28:43.738488 master-0 kubenswrapper[31559]: E0216 02:28:43.737763 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 02:28:46.940140 master-0 kubenswrapper[31559]: E0216 02:28:46.940076 31559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 16 02:28:47.933604 master-0 kubenswrapper[31559]: I0216 02:28:47.932990 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:47.935127 master-0 kubenswrapper[31559]: I0216 02:28:47.935021 31559 status_manager.go:851] "Failed to get status for pod" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-67b79bd656-cs2n2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:47.937564 master-0 kubenswrapper[31559]: I0216 02:28:47.937484 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial 
tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.447948 master-0 kubenswrapper[31559]: I0216 02:28:49.447874 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/3.log" Feb 16 02:28:49.449919 master-0 kubenswrapper[31559]: I0216 02:28:49.449868 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log" Feb 16 02:28:49.451875 master-0 kubenswrapper[31559]: I0216 02:28:49.451824 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/2.log" Feb 16 02:28:49.453689 master-0 kubenswrapper[31559]: I0216 02:28:49.453632 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log" Feb 16 02:28:49.453811 master-0 kubenswrapper[31559]: I0216 02:28:49.453740 31559 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691" exitCode=1 Feb 16 02:28:49.453811 master-0 kubenswrapper[31559]: I0216 02:28:49.453797 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerDied","Data":"7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691"} Feb 16 02:28:49.453942 master-0 kubenswrapper[31559]: I0216 02:28:49.453868 31559 scope.go:117] "RemoveContainer" containerID="5fc96fb916b196b3dbc229cfd525c7d85b5052106365d264bf8c22b6c5329dbb" Feb 16 02:28:49.454816 master-0 
kubenswrapper[31559]: I0216 02:28:49.454741 31559 scope.go:117] "RemoveContainer" containerID="7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691" Feb 16 02:28:49.455869 master-0 kubenswrapper[31559]: I0216 02:28:49.455769 31559 status_manager.go:851] "Failed to get status for pod" podUID="532487ad51c30257b744e7c1c79fb34f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.456723 master-0 kubenswrapper[31559]: E0216 02:28:49.456629 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" Feb 16 02:28:49.457339 master-0 kubenswrapper[31559]: I0216 02:28:49.457289 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.458708 master-0 kubenswrapper[31559]: I0216 02:28:49.458591 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.459987 master-0 
kubenswrapper[31559]: I0216 02:28:49.459876 31559 status_manager.go:851] "Failed to get status for pod" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-67b79bd656-cs2n2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.924689 master-0 kubenswrapper[31559]: I0216 02:28:49.924561 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:49.926489 master-0 kubenswrapper[31559]: I0216 02:28:49.926391 31559 status_manager.go:851] "Failed to get status for pod" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-67b79bd656-cs2n2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.927474 master-0 kubenswrapper[31559]: I0216 02:28:49.927350 31559 status_manager.go:851] "Failed to get status for pod" podUID="532487ad51c30257b744e7c1c79fb34f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.928342 master-0 kubenswrapper[31559]: I0216 02:28:49.928271 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.929112 master-0 kubenswrapper[31559]: I0216 02:28:49.929051 31559 
status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:49.961981 master-0 kubenswrapper[31559]: I0216 02:28:49.961897 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:49.961981 master-0 kubenswrapper[31559]: I0216 02:28:49.961958 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:49.962969 master-0 kubenswrapper[31559]: E0216 02:28:49.962890 31559 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:49.963890 master-0 kubenswrapper[31559]: I0216 02:28:49.963841 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:49.998285 master-0 kubenswrapper[31559]: W0216 02:28:49.998184 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa8ee25cec0b37c40dad37c52b89d42.slice/crio-9dfdbb576926c1092e2e7fda16e1ac2fd1efa6f1a7ded9611b1c681b6ce8e265 WatchSource:0}: Error finding container 9dfdbb576926c1092e2e7fda16e1ac2fd1efa6f1a7ded9611b1c681b6ce8e265: Status 404 returned error can't find the container with id 9dfdbb576926c1092e2e7fda16e1ac2fd1efa6f1a7ded9611b1c681b6ce8e265 Feb 16 02:28:50.384739 master-0 kubenswrapper[31559]: I0216 02:28:50.384672 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:28:50.464082 master-0 kubenswrapper[31559]: I0216 02:28:50.464004 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/3.log" Feb 16 02:28:50.466207 master-0 kubenswrapper[31559]: I0216 02:28:50.466142 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log" Feb 16 02:28:50.467999 master-0 kubenswrapper[31559]: I0216 02:28:50.467949 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log" Feb 16 02:28:50.468905 master-0 kubenswrapper[31559]: I0216 02:28:50.468854 31559 scope.go:117] "RemoveContainer" containerID="7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691" Feb 16 02:28:50.469367 master-0 kubenswrapper[31559]: E0216 02:28:50.469315 31559 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" Feb 16 02:28:50.470585 master-0 kubenswrapper[31559]: I0216 02:28:50.470229 31559 status_manager.go:851] "Failed to get status for pod" podUID="532487ad51c30257b744e7c1c79fb34f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:50.471013 master-0 kubenswrapper[31559]: I0216 02:28:50.470950 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:50.472473 master-0 kubenswrapper[31559]: I0216 02:28:50.471724 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:50.476971 master-0 kubenswrapper[31559]: I0216 02:28:50.476834 31559 status_manager.go:851] "Failed to get status for pod" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-67b79bd656-cs2n2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:50.477700 master-0 kubenswrapper[31559]: I0216 02:28:50.477643 31559 generic.go:334] "Generic (PLEG): container finished" podID="afa8ee25cec0b37c40dad37c52b89d42" containerID="b5cdb3a5acbb3a835027ee443616d35f0b97c2174c6ac5e921442f28162d15ee" exitCode=0 Feb 16 02:28:50.477805 master-0 kubenswrapper[31559]: I0216 02:28:50.477704 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerDied","Data":"b5cdb3a5acbb3a835027ee443616d35f0b97c2174c6ac5e921442f28162d15ee"} Feb 16 02:28:50.477805 master-0 kubenswrapper[31559]: I0216 02:28:50.477754 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"9dfdbb576926c1092e2e7fda16e1ac2fd1efa6f1a7ded9611b1c681b6ce8e265"} Feb 16 02:28:50.478207 master-0 kubenswrapper[31559]: I0216 02:28:50.478163 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:50.478207 master-0 kubenswrapper[31559]: I0216 02:28:50.478195 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:50.479833 master-0 kubenswrapper[31559]: E0216 02:28:50.479771 31559 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:50.480939 master-0 kubenswrapper[31559]: I0216 
02:28:50.480838 31559 status_manager.go:851] "Failed to get status for pod" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" pod="openshift-monitoring/metrics-server-67b79bd656-cs2n2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-67b79bd656-cs2n2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:50.481963 master-0 kubenswrapper[31559]: I0216 02:28:50.481880 31559 status_manager.go:851] "Failed to get status for pod" podUID="532487ad51c30257b744e7c1c79fb34f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:50.482905 master-0 kubenswrapper[31559]: I0216 02:28:50.482837 31559 status_manager.go:851] "Failed to get status for pod" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:50.483883 master-0 kubenswrapper[31559]: I0216 02:28:50.483796 31559 status_manager.go:851] "Failed to get status for pod" podUID="8859b956-70db-4e59-abff-faf38aa377fc" pod="openshift-console/console-7cdbd48f5b-slwlb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7cdbd48f5b-slwlb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 02:28:51.492859 master-0 kubenswrapper[31559]: I0216 02:28:51.492771 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"399e3c2c9aee33eb01aaa05fa19c173995cda6d374c1026c4e2e6b1114feaa46"} Feb 16 02:28:51.492859 
master-0 kubenswrapper[31559]: I0216 02:28:51.492825 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"2468d3e202d83afaa068410f870360ae13750c88b6a75aec61df522756d9b117"} Feb 16 02:28:52.561621 master-0 kubenswrapper[31559]: I0216 02:28:52.561541 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"3063a8aac5104022caa492578f6961de86c46d96872c188224ff21e2584ef11a"} Feb 16 02:28:52.561621 master-0 kubenswrapper[31559]: I0216 02:28:52.561618 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"46b7d1ad201ccc9e628a30636a9aec6d928cf997a495c76dbb6520619732f252"} Feb 16 02:28:52.562152 master-0 kubenswrapper[31559]: I0216 02:28:52.561637 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"5e4ae72cd8c2f4f325cdc1f2a000f3395f45ad809229aa8806d61ccebf2abf47"} Feb 16 02:28:52.562152 master-0 kubenswrapper[31559]: I0216 02:28:52.561864 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:52.562152 master-0 kubenswrapper[31559]: I0216 02:28:52.561894 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:52.562152 master-0 kubenswrapper[31559]: I0216 02:28:52.562057 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:52.863533 
master-0 kubenswrapper[31559]: I0216 02:28:52.863369 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:28:52.864137 master-0 kubenswrapper[31559]: I0216 02:28:52.864093 31559 scope.go:117] "RemoveContainer" containerID="7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691" Feb 16 02:28:52.864489 master-0 kubenswrapper[31559]: E0216 02:28:52.864453 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" Feb 16 02:28:54.964853 master-0 kubenswrapper[31559]: I0216 02:28:54.964783 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:54.965376 master-0 kubenswrapper[31559]: I0216 02:28:54.964869 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:54.972627 master-0 kubenswrapper[31559]: I0216 02:28:54.972575 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:57.592501 master-0 kubenswrapper[31559]: I0216 02:28:57.592397 31559 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:57.623642 master-0 kubenswrapper[31559]: I0216 02:28:57.623536 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:57.624881 master-0 kubenswrapper[31559]: I0216 
02:28:57.624847 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:57.631628 master-0 kubenswrapper[31559]: I0216 02:28:57.628187 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:28:57.661183 master-0 kubenswrapper[31559]: I0216 02:28:57.660913 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="afa8ee25cec0b37c40dad37c52b89d42" podUID="09416699-719a-4602-a6b9-f952f4f24892" Feb 16 02:28:58.632789 master-0 kubenswrapper[31559]: I0216 02:28:58.632717 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:58.632789 master-0 kubenswrapper[31559]: I0216 02:28:58.632772 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="37e5f834-e7ad-4cfa-ad87-ac0d0310ccfc" Feb 16 02:28:58.646015 master-0 kubenswrapper[31559]: I0216 02:28:58.645970 31559 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:28:58.646905 master-0 kubenswrapper[31559]: I0216 02:28:58.646867 31559 scope.go:117] "RemoveContainer" containerID="7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691" Feb 16 02:28:58.647376 master-0 kubenswrapper[31559]: E0216 02:28:58.647330 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(532487ad51c30257b744e7c1c79fb34f)\"" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" Feb 16 02:29:07.612715 master-0 kubenswrapper[31559]: I0216 02:29:07.612627 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 02:29:07.772062 master-0 kubenswrapper[31559]: I0216 02:29:07.771984 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 02:29:07.953226 master-0 kubenswrapper[31559]: I0216 02:29:07.953134 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="afa8ee25cec0b37c40dad37c52b89d42" podUID="09416699-719a-4602-a6b9-f952f4f24892" Feb 16 02:29:08.142307 master-0 kubenswrapper[31559]: I0216 02:29:08.142194 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 02:29:08.186045 master-0 kubenswrapper[31559]: I0216 02:29:08.185951 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 02:29:08.204149 master-0 kubenswrapper[31559]: I0216 02:29:08.203987 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 02:29:08.207069 master-0 kubenswrapper[31559]: I0216 02:29:08.207013 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 02:29:08.304391 master-0 kubenswrapper[31559]: I0216 02:29:08.304312 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 02:29:08.542507 master-0 kubenswrapper[31559]: I0216 02:29:08.542327 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 02:29:08.738484 master-0 kubenswrapper[31559]: I0216 02:29:08.738378 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 02:29:09.062241 master-0 kubenswrapper[31559]: I0216 02:29:09.062167 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 16 02:29:09.131559 master-0 kubenswrapper[31559]: I0216 02:29:09.131491 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 02:29:09.353925 master-0 kubenswrapper[31559]: I0216 02:29:09.353668 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 02:29:09.372572 master-0 kubenswrapper[31559]: I0216 02:29:09.372536 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 02:29:09.437678 master-0 kubenswrapper[31559]: I0216 02:29:09.437617 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 02:29:09.520430 master-0 kubenswrapper[31559]: I0216 02:29:09.520368 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 02:29:09.644319 master-0 kubenswrapper[31559]: I0216 02:29:09.644140 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 02:29:09.704562 master-0 kubenswrapper[31559]: I0216 02:29:09.704473 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 02:29:09.885663 master-0 kubenswrapper[31559]: I0216 02:29:09.885595 31559 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 02:29:09.891121 master-0 kubenswrapper[31559]: I0216 02:29:09.891061 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 16 02:29:09.935892 master-0 kubenswrapper[31559]: I0216 02:29:09.935796 31559 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 02:29:09.968862 master-0 kubenswrapper[31559]: I0216 02:29:09.968786 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 02:29:10.050274 master-0 kubenswrapper[31559]: I0216 02:29:10.050195 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 02:29:10.226343 master-0 kubenswrapper[31559]: I0216 02:29:10.226170 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nfl29" Feb 16 02:29:10.379845 master-0 kubenswrapper[31559]: I0216 02:29:10.379761 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-bh67v" Feb 16 02:29:10.382503 master-0 kubenswrapper[31559]: I0216 02:29:10.382338 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7m5el3rgfcivj" Feb 16 02:29:10.565238 master-0 kubenswrapper[31559]: I0216 02:29:10.565101 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 02:29:10.673513 master-0 kubenswrapper[31559]: I0216 02:29:10.673388 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 02:29:10.696623 master-0 kubenswrapper[31559]: I0216 02:29:10.696573 31559 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 16 02:29:10.811582 master-0 kubenswrapper[31559]: I0216 02:29:10.811525 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 02:29:10.861622 master-0 kubenswrapper[31559]: I0216 02:29:10.860671 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-xg7sl" Feb 16 02:29:10.960913 master-0 kubenswrapper[31559]: I0216 02:29:10.960836 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 02:29:11.026598 master-0 kubenswrapper[31559]: I0216 02:29:11.026542 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 02:29:11.050429 master-0 kubenswrapper[31559]: I0216 02:29:11.050349 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 02:29:11.100135 master-0 kubenswrapper[31559]: I0216 02:29:11.100053 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 02:29:11.111023 master-0 kubenswrapper[31559]: I0216 02:29:11.110974 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 02:29:11.145328 master-0 kubenswrapper[31559]: I0216 02:29:11.145217 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 02:29:11.173257 master-0 kubenswrapper[31559]: I0216 02:29:11.173162 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 02:29:11.173388 master-0 kubenswrapper[31559]: I0216 02:29:11.173181 31559 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 02:29:11.255741 master-0 kubenswrapper[31559]: I0216 02:29:11.255649 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 02:29:11.289331 master-0 kubenswrapper[31559]: I0216 02:29:11.289243 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 02:29:11.397539 master-0 kubenswrapper[31559]: I0216 02:29:11.397382 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 02:29:11.452230 master-0 kubenswrapper[31559]: I0216 02:29:11.452136 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 02:29:11.454454 master-0 kubenswrapper[31559]: I0216 02:29:11.454390 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-785jj" Feb 16 02:29:11.456259 master-0 kubenswrapper[31559]: I0216 02:29:11.456220 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 16 02:29:11.484981 master-0 kubenswrapper[31559]: I0216 02:29:11.484907 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 02:29:11.614013 master-0 kubenswrapper[31559]: I0216 02:29:11.613934 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 02:29:11.698264 master-0 kubenswrapper[31559]: I0216 02:29:11.698202 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 02:29:11.713013 master-0 kubenswrapper[31559]: I0216 02:29:11.712930 31559 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 02:29:11.773656 master-0 kubenswrapper[31559]: I0216 02:29:11.773577 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-s4hmw" Feb 16 02:29:11.801333 master-0 kubenswrapper[31559]: I0216 02:29:11.801261 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 02:29:11.802657 master-0 kubenswrapper[31559]: I0216 02:29:11.802613 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 02:29:11.861213 master-0 kubenswrapper[31559]: I0216 02:29:11.861088 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 02:29:11.951727 master-0 kubenswrapper[31559]: I0216 02:29:11.941376 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 02:29:11.972806 master-0 kubenswrapper[31559]: I0216 02:29:11.972580 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 02:29:11.978783 master-0 kubenswrapper[31559]: I0216 02:29:11.978644 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 02:29:11.978783 master-0 kubenswrapper[31559]: I0216 02:29:11.978707 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 16 02:29:11.980564 master-0 kubenswrapper[31559]: I0216 02:29:11.980401 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 02:29:12.032037 master-0 
kubenswrapper[31559]: I0216 02:29:12.031684 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 16 02:29:12.145509 master-0 kubenswrapper[31559]: I0216 02:29:12.145392 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 02:29:12.188715 master-0 kubenswrapper[31559]: I0216 02:29:12.188641 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 16 02:29:12.201079 master-0 kubenswrapper[31559]: I0216 02:29:12.201000 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 16 02:29:12.243495 master-0 kubenswrapper[31559]: I0216 02:29:12.243272 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 02:29:12.289396 master-0 kubenswrapper[31559]: I0216 02:29:12.289329 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 16 02:29:12.321323 master-0 kubenswrapper[31559]: I0216 02:29:12.321083 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 16 02:29:12.400220 master-0 kubenswrapper[31559]: I0216 02:29:12.400163 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 02:29:12.415584 master-0 kubenswrapper[31559]: I0216 02:29:12.415522 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 16 02:29:12.483695 master-0 kubenswrapper[31559]: I0216 02:29:12.483590 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 16 02:29:12.555250 master-0 kubenswrapper[31559]: I0216 02:29:12.555074 31559 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 02:29:12.579009 master-0 kubenswrapper[31559]: I0216 02:29:12.578919 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 02:29:12.586323 master-0 kubenswrapper[31559]: I0216 02:29:12.586271 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 16 02:29:12.765980 master-0 kubenswrapper[31559]: I0216 02:29:12.765866 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 16 02:29:12.804395 master-0 kubenswrapper[31559]: I0216 02:29:12.804317 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 02:29:12.809157 master-0 kubenswrapper[31559]: I0216 02:29:12.808740 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 16 02:29:12.861619 master-0 kubenswrapper[31559]: I0216 02:29:12.861544 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 02:29:12.924849 master-0 kubenswrapper[31559]: I0216 02:29:12.924768 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 02:29:12.927306 master-0 kubenswrapper[31559]: I0216 02:29:12.926025 31559 scope.go:117] "RemoveContainer" containerID="7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691"
Feb 16 02:29:12.943123 master-0 kubenswrapper[31559]: I0216 02:29:12.943048 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 02:29:12.957851 master-0 kubenswrapper[31559]: I0216 02:29:12.957790 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 02:29:12.992231 master-0 kubenswrapper[31559]: I0216 02:29:12.992116 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 16 02:29:13.007388 master-0 kubenswrapper[31559]: I0216 02:29:13.007299 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 16 02:29:13.055233 master-0 kubenswrapper[31559]: I0216 02:29:13.055138 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 02:29:13.072678 master-0 kubenswrapper[31559]: I0216 02:29:13.072512 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-r9rvr"
Feb 16 02:29:13.091036 master-0 kubenswrapper[31559]: I0216 02:29:13.090957 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 02:29:13.097847 master-0 kubenswrapper[31559]: I0216 02:29:13.097792 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 16 02:29:13.135116 master-0 kubenswrapper[31559]: I0216 02:29:13.135061 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 02:29:13.152471 master-0 kubenswrapper[31559]: I0216 02:29:13.151467 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 02:29:13.182061 master-0 kubenswrapper[31559]: I0216 02:29:13.181994 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmfsl"
Feb 16 02:29:13.204810 master-0 kubenswrapper[31559]: I0216 02:29:13.204735 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 16 02:29:13.279669 master-0 kubenswrapper[31559]: I0216 02:29:13.279571 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9dqnm"
Feb 16 02:29:13.298331 master-0 kubenswrapper[31559]: I0216 02:29:13.298216 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 02:29:13.307209 master-0 kubenswrapper[31559]: I0216 02:29:13.307132 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 16 02:29:13.417726 master-0 kubenswrapper[31559]: I0216 02:29:13.417633 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-whkzn"
Feb 16 02:29:13.421893 master-0 kubenswrapper[31559]: I0216 02:29:13.421831 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 02:29:13.460266 master-0 kubenswrapper[31559]: I0216 02:29:13.460175 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 02:29:13.572045 master-0 kubenswrapper[31559]: I0216 02:29:13.571937 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 02:29:13.583609 master-0 kubenswrapper[31559]: I0216 02:29:13.583541 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 02:29:13.637540 master-0 kubenswrapper[31559]: I0216 02:29:13.637463 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 16 02:29:13.641104 master-0 kubenswrapper[31559]: I0216 02:29:13.641053 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 02:29:13.644310 master-0 kubenswrapper[31559]: I0216 02:29:13.644255 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 02:29:13.661252 master-0 kubenswrapper[31559]: I0216 02:29:13.661007 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 02:29:13.686099 master-0 kubenswrapper[31559]: I0216 02:29:13.685947 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 02:29:13.688194 master-0 kubenswrapper[31559]: I0216 02:29:13.688118 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 02:29:13.760282 master-0 kubenswrapper[31559]: I0216 02:29:13.760192 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 02:29:13.763684 master-0 kubenswrapper[31559]: I0216 02:29:13.763537 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 02:29:13.795482 master-0 kubenswrapper[31559]: I0216 02:29:13.794692 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/3.log"
Feb 16 02:29:13.796639 master-0 kubenswrapper[31559]: I0216 02:29:13.796590 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log"
Feb 16 02:29:13.799629 master-0 kubenswrapper[31559]: I0216 02:29:13.799588 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log"
Feb 16 02:29:13.799763 master-0 kubenswrapper[31559]: I0216 02:29:13.799680 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"532487ad51c30257b744e7c1c79fb34f","Type":"ContainerStarted","Data":"8e65952f68dc70a998ca96fc43cd86d783845ba696b7cee768810bbdde0b1b72"}
Feb 16 02:29:13.855085 master-0 kubenswrapper[31559]: I0216 02:29:13.855027 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 02:29:13.899418 master-0 kubenswrapper[31559]: I0216 02:29:13.898915 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 02:29:13.968161 master-0 kubenswrapper[31559]: I0216 02:29:13.967362 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 02:29:14.020312 master-0 kubenswrapper[31559]: I0216 02:29:14.020207 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 16 02:29:14.034789 master-0 kubenswrapper[31559]: I0216 02:29:14.034716 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 16 02:29:14.042206 master-0 kubenswrapper[31559]: I0216 02:29:14.040737 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 02:29:14.078703 master-0 kubenswrapper[31559]: I0216 02:29:14.078583 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 02:29:14.151979 master-0 kubenswrapper[31559]: I0216 02:29:14.151869 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 02:29:14.203199 master-0 kubenswrapper[31559]: I0216 02:29:14.203101 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 02:29:14.257756 master-0 kubenswrapper[31559]: I0216 02:29:14.257610 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 02:29:14.259092 master-0 kubenswrapper[31559]: I0216 02:29:14.259047 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 16 02:29:14.366328 master-0 kubenswrapper[31559]: I0216 02:29:14.366249 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 02:29:14.453237 master-0 kubenswrapper[31559]: I0216 02:29:14.453165 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 02:29:14.497124 master-0 kubenswrapper[31559]: I0216 02:29:14.497059 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 02:29:14.546103 master-0 kubenswrapper[31559]: I0216 02:29:14.545969 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 02:29:14.568258 master-0 kubenswrapper[31559]: I0216 02:29:14.568191 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 16 02:29:14.663347 master-0 kubenswrapper[31559]: I0216 02:29:14.663278 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 02:29:14.746136 master-0 kubenswrapper[31559]: I0216 02:29:14.746058 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 16 02:29:14.751651 master-0 kubenswrapper[31559]: I0216 02:29:14.751589 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kvrqk"
Feb 16 02:29:14.779565 master-0 kubenswrapper[31559]: I0216 02:29:14.779266 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 16 02:29:14.780922 master-0 kubenswrapper[31559]: I0216 02:29:14.780844 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 02:29:14.804678 master-0 kubenswrapper[31559]: I0216 02:29:14.804537 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 02:29:14.854915 master-0 kubenswrapper[31559]: I0216 02:29:14.854834 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-sfwp5"
Feb 16 02:29:14.855595 master-0 kubenswrapper[31559]: I0216 02:29:14.855536 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 02:29:14.870881 master-0 kubenswrapper[31559]: I0216 02:29:14.870797 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 16 02:29:15.078362 master-0 kubenswrapper[31559]: I0216 02:29:15.078220 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 02:29:15.079754 master-0 kubenswrapper[31559]: I0216 02:29:15.079700 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 02:29:15.106357 master-0 kubenswrapper[31559]: I0216 02:29:15.106284 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 02:29:15.131696 master-0 kubenswrapper[31559]: I0216 02:29:15.131583 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 02:29:15.143143 master-0 kubenswrapper[31559]: I0216 02:29:15.143082 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 02:29:15.152853 master-0 kubenswrapper[31559]: I0216 02:29:15.152788 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 16 02:29:15.198973 master-0 kubenswrapper[31559]: I0216 02:29:15.198905 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 16 02:29:15.223888 master-0 kubenswrapper[31559]: I0216 02:29:15.223836 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 16 02:29:15.285708 master-0 kubenswrapper[31559]: I0216 02:29:15.285628 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 02:29:15.285988 master-0 kubenswrapper[31559]: I0216 02:29:15.285904 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 02:29:15.288890 master-0 kubenswrapper[31559]: I0216 02:29:15.288848 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 02:29:15.302586 master-0 kubenswrapper[31559]: I0216 02:29:15.302535 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 02:29:15.312259 master-0 kubenswrapper[31559]: I0216 02:29:15.312186 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 02:29:15.324490 master-0 kubenswrapper[31559]: I0216 02:29:15.324389 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-8pgh8"
Feb 16 02:29:15.402787 master-0 kubenswrapper[31559]: I0216 02:29:15.402601 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 16 02:29:15.432296 master-0 kubenswrapper[31559]: I0216 02:29:15.432210 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 16 02:29:15.580935 master-0 kubenswrapper[31559]: I0216 02:29:15.580867 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 16 02:29:15.619620 master-0 kubenswrapper[31559]: I0216 02:29:15.619410 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 16 02:29:15.621312 master-0 kubenswrapper[31559]: I0216 02:29:15.621231 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 02:29:15.649042 master-0 kubenswrapper[31559]: I0216 02:29:15.648961 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 16 02:29:15.650515 master-0 kubenswrapper[31559]: I0216 02:29:15.650430 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 16 02:29:15.675233 master-0 kubenswrapper[31559]: I0216 02:29:15.675150 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-f82q5"
Feb 16 02:29:15.720422 master-0 kubenswrapper[31559]: I0216 02:29:15.720331 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 16 02:29:15.737825 master-0 kubenswrapper[31559]: I0216 02:29:15.737766 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 16 02:29:15.831058 master-0 kubenswrapper[31559]: I0216 02:29:15.830980 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 02:29:15.859075 master-0 kubenswrapper[31559]: I0216 02:29:15.859018 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 02:29:15.901055 master-0 kubenswrapper[31559]: I0216 02:29:15.900995 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 02:29:15.906800 master-0 kubenswrapper[31559]: I0216 02:29:15.906753 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 16 02:29:16.055463 master-0 kubenswrapper[31559]: I0216 02:29:16.055301 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 02:29:16.087968 master-0 kubenswrapper[31559]: I0216 02:29:16.087888 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 02:29:16.095941 master-0 kubenswrapper[31559]: I0216 02:29:16.095871 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 16 02:29:16.175265 master-0 kubenswrapper[31559]: I0216 02:29:16.175174 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 16 02:29:16.199577 master-0 kubenswrapper[31559]: I0216 02:29:16.199493 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 02:29:16.265129 master-0 kubenswrapper[31559]: I0216 02:29:16.265037 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 02:29:16.282613 master-0 kubenswrapper[31559]: I0216 02:29:16.282501 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 02:29:16.394305 master-0 kubenswrapper[31559]: I0216 02:29:16.394142 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 16 02:29:16.406514 master-0 kubenswrapper[31559]: I0216 02:29:16.406428 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 02:29:16.408407 master-0 kubenswrapper[31559]: I0216 02:29:16.408344 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 02:29:16.436308 master-0 kubenswrapper[31559]: I0216 02:29:16.436218 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 02:29:16.654331 master-0 kubenswrapper[31559]: I0216 02:29:16.654167 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 16 02:29:16.684853 master-0 kubenswrapper[31559]: I0216 02:29:16.684798 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-bxb2h"
Feb 16 02:29:16.709453 master-0 kubenswrapper[31559]: I0216 02:29:16.709349 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 16 02:29:16.733376 master-0 kubenswrapper[31559]: I0216 02:29:16.733303 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 02:29:16.754148 master-0 kubenswrapper[31559]: I0216 02:29:16.754086 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 02:29:16.780589 master-0 kubenswrapper[31559]: I0216 02:29:16.780506 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 02:29:16.836967 master-0 kubenswrapper[31559]: I0216 02:29:16.836886 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 02:29:16.887964 master-0 kubenswrapper[31559]: I0216 02:29:16.887858 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 02:29:16.991230 master-0 kubenswrapper[31559]: I0216 02:29:16.991159 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-cltf9"
Feb 16 02:29:17.012825 master-0 kubenswrapper[31559]: I0216 02:29:17.012741 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 16 02:29:17.153969 master-0 kubenswrapper[31559]: I0216 02:29:17.153890 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 02:29:17.204925 master-0 kubenswrapper[31559]: I0216 02:29:17.204856 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 02:29:17.274374 master-0 kubenswrapper[31559]: I0216 02:29:17.274250 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 02:29:17.291232 master-0 kubenswrapper[31559]: I0216 02:29:17.291146 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 16 02:29:17.352953 master-0 kubenswrapper[31559]: I0216 02:29:17.352889 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 02:29:17.441806 master-0 kubenswrapper[31559]: I0216 02:29:17.441764 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-3lmddhljekc4u"
Feb 16 02:29:17.464367 master-0 kubenswrapper[31559]: I0216 02:29:17.464297 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 16 02:29:17.496951 master-0 kubenswrapper[31559]: I0216 02:29:17.496900 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 02:29:17.538703 master-0 kubenswrapper[31559]: I0216 02:29:17.538565 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 02:29:17.541617 master-0 kubenswrapper[31559]: I0216 02:29:17.541574 31559 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 02:29:17.544857 master-0 kubenswrapper[31559]: I0216 02:29:17.544800 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 16 02:29:17.627781 master-0 kubenswrapper[31559]: I0216 02:29:17.627735 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 02:29:17.661892 master-0 kubenswrapper[31559]: I0216 02:29:17.661783 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 02:29:17.759478 master-0 kubenswrapper[31559]: I0216 02:29:17.759363 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 02:29:17.780037 master-0 kubenswrapper[31559]: I0216 02:29:17.779938 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 16 02:29:17.788064 master-0 kubenswrapper[31559]: I0216 02:29:17.788002 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 16 02:29:17.821891 master-0 kubenswrapper[31559]: I0216 02:29:17.821715 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 02:29:17.859463 master-0 kubenswrapper[31559]: I0216 02:29:17.855831 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 02:29:17.869282 master-0 kubenswrapper[31559]: I0216 02:29:17.869201 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 02:29:17.883669 master-0 kubenswrapper[31559]: I0216 02:29:17.883608 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 02:29:17.910200 master-0 kubenswrapper[31559]: I0216 02:29:17.910044 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 16 02:29:17.921038 master-0 kubenswrapper[31559]: I0216 02:29:17.920983 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-ccbvw"
Feb 16 02:29:17.978227 master-0 kubenswrapper[31559]: I0216 02:29:17.978181 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 02:29:18.044381 master-0 kubenswrapper[31559]: I0216 02:29:18.044303 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 16 02:29:18.055604 master-0 kubenswrapper[31559]: I0216 02:29:18.055569 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 02:29:18.091833 master-0 kubenswrapper[31559]: I0216 02:29:18.091712 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 02:29:18.137152 master-0 kubenswrapper[31559]: I0216 02:29:18.137054 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 16 02:29:18.229040 master-0 kubenswrapper[31559]: I0216 02:29:18.228957 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 02:29:18.315496 master-0 kubenswrapper[31559]: I0216 02:29:18.315323 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 02:29:18.342294 master-0 kubenswrapper[31559]: I0216 02:29:18.342194 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 02:29:18.372223 master-0 kubenswrapper[31559]: I0216 02:29:18.372195 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 02:29:18.379407 master-0 kubenswrapper[31559]: I0216 02:29:18.379387 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 02:29:18.411203 master-0 kubenswrapper[31559]: I0216 02:29:18.411148 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 02:29:18.414821 master-0 kubenswrapper[31559]: I0216 02:29:18.414788 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 02:29:18.431049 master-0 kubenswrapper[31559]: I0216 02:29:18.430957 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 02:29:18.442333 master-0 kubenswrapper[31559]: I0216 02:29:18.442260 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-xrwft"
Feb 16 02:29:18.485507 master-0 kubenswrapper[31559]: I0216 02:29:18.485164 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 02:29:18.522407 master-0 kubenswrapper[31559]: I0216 02:29:18.522329 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 02:29:18.559674 master-0 kubenswrapper[31559]: I0216 02:29:18.559531 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 02:29:18.682802 master-0 kubenswrapper[31559]: I0216 02:29:18.682723 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 16 02:29:18.735581 master-0 kubenswrapper[31559]: I0216 02:29:18.734269 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 16 02:29:18.763511 master-0 kubenswrapper[31559]: I0216 02:29:18.758922 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 02:29:18.808935 master-0 kubenswrapper[31559]: I0216 02:29:18.808853 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 16 02:29:18.831588 master-0 kubenswrapper[31559]: I0216 02:29:18.831487 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 02:29:18.889268 master-0 kubenswrapper[31559]: I0216 02:29:18.889204 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 16 02:29:18.908645 master-0 kubenswrapper[31559]: I0216 02:29:18.908291 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 02:29:18.908645 master-0 kubenswrapper[31559]: I0216 02:29:18.908387 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-tmpwz"
Feb 16 02:29:18.994792 master-0 kubenswrapper[31559]: I0216 02:29:18.994640 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 02:29:19.049599 master-0 kubenswrapper[31559]: I0216 02:29:19.049495 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 16 02:29:19.049599 master-0 kubenswrapper[31559]: I0216 02:29:19.049543 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-70ac8e3ti0c8f"
Feb 16 02:29:19.127696 master-0 kubenswrapper[31559]: I0216 02:29:19.127219 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 02:29:19.165296 master-0 kubenswrapper[31559]: I0216 02:29:19.165217 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 16 02:29:19.233511 master-0 kubenswrapper[31559]: I0216 02:29:19.233396 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 02:29:19.234996 master-0 kubenswrapper[31559]: I0216 02:29:19.234944 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 16 02:29:19.345325 master-0 kubenswrapper[31559]: I0216 02:29:19.345140 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 02:29:19.475955 master-0 kubenswrapper[31559]: I0216 02:29:19.475870 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 02:29:19.489688 master-0 kubenswrapper[31559]: I0216 02:29:19.489608 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 02:29:19.490517 master-0 kubenswrapper[31559]: I0216 02:29:19.490428 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 02:29:19.544398 master-0 kubenswrapper[31559]: I0216 02:29:19.544287 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 16 02:29:19.553773 master-0 kubenswrapper[31559]: I0216 02:29:19.553724 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 02:29:19.573572 master-0 kubenswrapper[31559]: I0216 02:29:19.573510 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 02:29:19.573715 master-0 kubenswrapper[31559]: I0216 02:29:19.573640 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 02:29:19.583877 master-0 kubenswrapper[31559]: I0216 02:29:19.583814 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 16 02:29:19.595717 master-0 kubenswrapper[31559]: I0216 02:29:19.595576 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 16 02:29:19.617183 master-0 kubenswrapper[31559]: I0216 02:29:19.617049 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 16 02:29:19.654028 master-0 kubenswrapper[31559]: I0216 02:29:19.653909 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 16 02:29:19.658975 master-0 kubenswrapper[31559]: I0216 02:29:19.658697 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 02:29:19.696469 master-0 kubenswrapper[31559]: I0216 02:29:19.696375 31559 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 02:29:19.710010 master-0 kubenswrapper[31559]: I0216 02:29:19.709933 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 02:29:19.763475 master-0 kubenswrapper[31559]: I0216 02:29:19.763378 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 16 02:29:19.774053 master-0 kubenswrapper[31559]: I0216 02:29:19.773994 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 02:29:19.793580 master-0 kubenswrapper[31559]: I0216 02:29:19.793508 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 02:29:19.883057 master-0 kubenswrapper[31559]: I0216 02:29:19.882876 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 02:29:20.134618 master-0 kubenswrapper[31559]: I0216 02:29:20.134497 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 02:29:20.213330 master-0 kubenswrapper[31559]: I0216 02:29:20.213263 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 16 02:29:20.221925 master-0 kubenswrapper[31559]: I0216 02:29:20.221882 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 02:29:20.223298 master-0 kubenswrapper[31559]: I0216 02:29:20.222711 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 16 02:29:20.232638 master-0 kubenswrapper[31559]: I0216 02:29:20.232507 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 02:29:20.237663 master-0 kubenswrapper[31559]: I0216 02:29:20.237578 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 02:29:20.355016 master-0 kubenswrapper[31559]: I0216 02:29:20.354935 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 02:29:20.366746 master-0 kubenswrapper[31559]: I0216 02:29:20.366662 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 16 02:29:20.384411 master-0 kubenswrapper[31559]: I0216 02:29:20.384340 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 02:29:20.391410 master-0 kubenswrapper[31559]: I0216
02:29:20.391281 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:29:20.449170 master-0 kubenswrapper[31559]: I0216 02:29:20.449091 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 02:29:20.649217 master-0 kubenswrapper[31559]: I0216 02:29:20.649049 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 02:29:20.658702 master-0 kubenswrapper[31559]: I0216 02:29:20.658640 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 02:29:20.722408 master-0 kubenswrapper[31559]: I0216 02:29:20.722320 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 02:29:20.829776 master-0 kubenswrapper[31559]: I0216 02:29:20.829683 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-wsv7k" Feb 16 02:29:20.840765 master-0 kubenswrapper[31559]: I0216 02:29:20.840712 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-wp42g" Feb 16 02:29:20.869816 master-0 kubenswrapper[31559]: I0216 02:29:20.869742 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:29:20.999857 master-0 kubenswrapper[31559]: I0216 02:29:20.999762 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 02:29:21.022859 master-0 kubenswrapper[31559]: I0216 02:29:21.022796 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 02:29:21.067723 master-0 kubenswrapper[31559]: I0216 02:29:21.067630 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 02:29:21.090621 master-0 kubenswrapper[31559]: I0216 02:29:21.090533 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 16 02:29:21.164941 master-0 kubenswrapper[31559]: I0216 02:29:21.164806 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mhjgf" Feb 16 02:29:21.203089 master-0 kubenswrapper[31559]: I0216 02:29:21.203000 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 02:29:21.251355 master-0 kubenswrapper[31559]: I0216 02:29:21.251224 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 02:29:21.272409 master-0 kubenswrapper[31559]: I0216 02:29:21.272364 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 02:29:21.355874 master-0 kubenswrapper[31559]: I0216 02:29:21.355804 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 02:29:21.514665 master-0 kubenswrapper[31559]: I0216 02:29:21.514537 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-tw6fq" Feb 16 02:29:21.564707 master-0 kubenswrapper[31559]: I0216 02:29:21.564625 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 02:29:21.602352 master-0 
kubenswrapper[31559]: I0216 02:29:21.602249 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 02:29:21.623305 master-0 kubenswrapper[31559]: I0216 02:29:21.623203 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 02:29:21.632515 master-0 kubenswrapper[31559]: I0216 02:29:21.632389 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 02:29:21.634695 master-0 kubenswrapper[31559]: I0216 02:29:21.634617 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 02:29:21.745851 master-0 kubenswrapper[31559]: I0216 02:29:21.745769 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 02:29:21.845263 master-0 kubenswrapper[31559]: I0216 02:29:21.845111 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 02:29:21.862816 master-0 kubenswrapper[31559]: I0216 02:29:21.862738 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 02:29:21.916749 master-0 kubenswrapper[31559]: I0216 02:29:21.916662 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4zs9t" Feb 16 02:29:21.917498 master-0 kubenswrapper[31559]: I0216 02:29:21.917079 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 02:29:21.987761 master-0 kubenswrapper[31559]: I0216 02:29:21.987648 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 02:29:22.313783 master-0 kubenswrapper[31559]: 
I0216 02:29:22.313684 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 02:29:22.431008 master-0 kubenswrapper[31559]: I0216 02:29:22.430909 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 02:29:22.701408 master-0 kubenswrapper[31559]: I0216 02:29:22.701320 31559 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 02:29:22.721992 master-0 kubenswrapper[31559]: I0216 02:29:22.721361 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-67b79bd656-cs2n2","openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 02:29:22.721992 master-0 kubenswrapper[31559]: I0216 02:29:22.721512 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 02:29:22.746527 master-0 kubenswrapper[31559]: I0216 02:29:22.740517 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 02:29:22.763402 master-0 kubenswrapper[31559]: I0216 02:29:22.761605 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=25.761584361 podStartE2EDuration="25.761584361s" podCreationTimestamp="2026-02-16 02:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:29:22.757417702 +0000 UTC m=+415.102023727" watchObservedRunningTime="2026-02-16 02:29:22.761584361 +0000 UTC m=+415.106190386" Feb 16 02:29:22.795418 master-0 kubenswrapper[31559]: I0216 02:29:22.795354 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 02:29:22.823270 master-0 kubenswrapper[31559]: I0216 02:29:22.823211 
31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-j5nng" Feb 16 02:29:22.842987 master-0 kubenswrapper[31559]: I0216 02:29:22.842946 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 16 02:29:22.867378 master-0 kubenswrapper[31559]: I0216 02:29:22.867284 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 16 02:29:22.949049 master-0 kubenswrapper[31559]: I0216 02:29:22.948961 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 02:29:22.957300 master-0 kubenswrapper[31559]: I0216 02:29:22.957128 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 02:29:23.266851 master-0 kubenswrapper[31559]: I0216 02:29:23.266701 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 16 02:29:23.939683 master-0 kubenswrapper[31559]: I0216 02:29:23.939573 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" path="/var/lib/kubelet/pods/8c267cc7-a51a-4b14-baee-e584254eefc5/volumes" Feb 16 02:29:24.349103 master-0 kubenswrapper[31559]: I0216 02:29:24.348931 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 02:29:24.412605 master-0 kubenswrapper[31559]: I0216 02:29:24.412519 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 16 02:29:24.983873 master-0 kubenswrapper[31559]: I0216 02:29:24.983776 31559 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 02:29:29.698638 master-0 kubenswrapper[31559]: I0216 02:29:29.698511 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:29:29.728184 master-0 kubenswrapper[31559]: I0216 02:29:29.728064 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:29:30.010839 master-0 kubenswrapper[31559]: I0216 02:29:30.010662 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 02:29:30.492006 master-0 kubenswrapper[31559]: I0216 02:29:30.491915 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-w8x86" Feb 16 02:29:30.659918 master-0 kubenswrapper[31559]: I0216 02:29:30.659779 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 02:29:30.886285 master-0 kubenswrapper[31559]: I0216 02:29:30.886146 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6ddf646cb9-nx4kk"] Feb 16 02:29:30.904569 master-0 kubenswrapper[31559]: I0216 02:29:30.902962 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"] Feb 16 02:29:30.904569 master-0 kubenswrapper[31559]: E0216 02:29:30.903289 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" containerName="installer" Feb 16 02:29:30.904569 master-0 kubenswrapper[31559]: I0216 02:29:30.903302 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" containerName="installer" Feb 16 02:29:30.904569 master-0 kubenswrapper[31559]: E0216 02:29:30.903317 31559 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" containerName="metrics-server" Feb 16 02:29:30.904569 master-0 kubenswrapper[31559]: I0216 02:29:30.903323 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" containerName="metrics-server" Feb 16 02:29:30.904569 master-0 kubenswrapper[31559]: I0216 02:29:30.903527 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc8db6b-5b2f-4fd7-b0fe-3a45a5012cdb" containerName="installer" Feb 16 02:29:30.904569 master-0 kubenswrapper[31559]: I0216 02:29:30.903540 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c267cc7-a51a-4b14-baee-e584254eefc5" containerName="metrics-server" Feb 16 02:29:30.904569 master-0 kubenswrapper[31559]: I0216 02:29:30.904030 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:30.911346 master-0 kubenswrapper[31559]: I0216 02:29:30.911268 31559 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Feb 16 02:29:30.911427 master-0 kubenswrapper[31559]: I0216 02:29:30.911363 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 16 02:29:30.913115 master-0 kubenswrapper[31559]: I0216 02:29:30.911739 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Feb 16 02:29:30.913115 master-0 kubenswrapper[31559]: I0216 02:29:30.911773 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Feb 16 02:29:30.916399 master-0 kubenswrapper[31559]: I0216 02:29:30.916360 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"] Feb 16 02:29:30.994525 master-0 kubenswrapper[31559]: I0216 02:29:30.994429 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52lj\" (UniqueName: \"kubernetes.io/projected/83093487-16c4-44d2-a29f-bd113826e05a-kube-api-access-m52lj\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:30.995607 master-0 kubenswrapper[31559]: I0216 02:29:30.995523 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/83093487-16c4-44d2-a29f-bd113826e05a-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:30.995786 master-0 kubenswrapper[31559]: I0216 02:29:30.995749 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/83093487-16c4-44d2-a29f-bd113826e05a-os-client-config\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:31.099252 master-0 kubenswrapper[31559]: I0216 02:29:31.099129 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m52lj\" (UniqueName: \"kubernetes.io/projected/83093487-16c4-44d2-a29f-bd113826e05a-kube-api-access-m52lj\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:31.099583 master-0 kubenswrapper[31559]: I0216 02:29:31.099408 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/83093487-16c4-44d2-a29f-bd113826e05a-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: 
\"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:31.099583 master-0 kubenswrapper[31559]: I0216 02:29:31.099553 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/83093487-16c4-44d2-a29f-bd113826e05a-os-client-config\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:31.101785 master-0 kubenswrapper[31559]: I0216 02:29:31.101704 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/83093487-16c4-44d2-a29f-bd113826e05a-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:31.125494 master-0 kubenswrapper[31559]: I0216 02:29:31.125218 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/83093487-16c4-44d2-a29f-bd113826e05a-os-client-config\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:31.146089 master-0 kubenswrapper[31559]: I0216 02:29:31.145927 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m52lj\" (UniqueName: \"kubernetes.io/projected/83093487-16c4-44d2-a29f-bd113826e05a-kube-api-access-m52lj\") pod \"sushy-emulator-58f4c9b998-t2jlp\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:31.247230 master-0 kubenswrapper[31559]: I0216 02:29:31.247142 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" Feb 16 02:29:31.605844 master-0 kubenswrapper[31559]: I0216 02:29:31.604728 31559 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 02:29:31.605844 master-0 kubenswrapper[31559]: I0216 02:29:31.605068 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" containerID="cri-o://952b358416521f11e1d86b9917b4c4ff5b0730ac77eeb86ca5f67f6a183f7d45" gracePeriod=5 Feb 16 02:29:31.784633 master-0 kubenswrapper[31559]: I0216 02:29:31.784543 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"] Feb 16 02:29:31.793645 master-0 kubenswrapper[31559]: W0216 02:29:31.793568 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83093487_16c4_44d2_a29f_bd113826e05a.slice/crio-29cf36b3d1bbbe59b609a00a8ae6741a025ce1109dd6f774c7afe7497dbc0e4f WatchSource:0}: Error finding container 29cf36b3d1bbbe59b609a00a8ae6741a025ce1109dd6f774c7afe7497dbc0e4f: Status 404 returned error can't find the container with id 29cf36b3d1bbbe59b609a00a8ae6741a025ce1109dd6f774c7afe7497dbc0e4f Feb 16 02:29:31.795822 master-0 kubenswrapper[31559]: I0216 02:29:31.795748 31559 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 02:29:31.976254 master-0 kubenswrapper[31559]: I0216 02:29:31.976189 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" event={"ID":"83093487-16c4-44d2-a29f-bd113826e05a","Type":"ContainerStarted","Data":"29cf36b3d1bbbe59b609a00a8ae6741a025ce1109dd6f774c7afe7497dbc0e4f"} Feb 16 02:29:32.866851 master-0 kubenswrapper[31559]: I0216 
02:29:32.866801 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:29:37.031634 master-0 kubenswrapper[31559]: I0216 02:29:37.031569 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log" Feb 16 02:29:37.032602 master-0 kubenswrapper[31559]: I0216 02:29:37.031646 31559 generic.go:334] "Generic (PLEG): container finished" podID="a9a2b3a37af32e5d570b82bfd956f250" containerID="952b358416521f11e1d86b9917b4c4ff5b0730ac77eeb86ca5f67f6a183f7d45" exitCode=137 Feb 16 02:29:37.646428 master-0 kubenswrapper[31559]: I0216 02:29:37.646341 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log" Feb 16 02:29:37.646650 master-0 kubenswrapper[31559]: I0216 02:29:37.646485 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 02:29:37.820729 master-0 kubenswrapper[31559]: I0216 02:29:37.820657 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 02:29:37.820925 master-0 kubenswrapper[31559]: I0216 02:29:37.820765 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 02:29:37.821014 master-0 kubenswrapper[31559]: I0216 02:29:37.820921 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 02:29:37.821014 master-0 kubenswrapper[31559]: I0216 02:29:37.820921 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests" (OuterVolumeSpecName: "manifests") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:29:37.821014 master-0 kubenswrapper[31559]: I0216 02:29:37.820957 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 02:29:37.821196 master-0 kubenswrapper[31559]: I0216 02:29:37.821026 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock" (OuterVolumeSpecName: "var-lock") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:29:37.821196 master-0 kubenswrapper[31559]: I0216 02:29:37.821030 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log" (OuterVolumeSpecName: "var-log") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:29:37.821196 master-0 kubenswrapper[31559]: I0216 02:29:37.821072 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 02:29:37.821196 master-0 kubenswrapper[31559]: I0216 02:29:37.821158 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:29:37.821977 master-0 kubenswrapper[31559]: I0216 02:29:37.821930 31559 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:29:37.821977 master-0 kubenswrapper[31559]: I0216 02:29:37.821966 31559 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") on node \"master-0\" DevicePath \"\"" Feb 16 02:29:37.822178 master-0 kubenswrapper[31559]: I0216 02:29:37.821984 31559 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") on node \"master-0\" DevicePath \"\"" Feb 16 02:29:37.822178 master-0 kubenswrapper[31559]: I0216 02:29:37.822003 31559 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:29:37.830906 master-0 kubenswrapper[31559]: I0216 02:29:37.830780 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:29:37.923318 master-0 kubenswrapper[31559]: I0216 02:29:37.923231 31559 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 02:29:37.946664 master-0 kubenswrapper[31559]: I0216 02:29:37.946590 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a2b3a37af32e5d570b82bfd956f250" path="/var/lib/kubelet/pods/a9a2b3a37af32e5d570b82bfd956f250/volumes"
Feb 16 02:29:38.039626 master-0 kubenswrapper[31559]: I0216 02:29:38.039564 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log"
Feb 16 02:29:38.040478 master-0 kubenswrapper[31559]: I0216 02:29:38.039727 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 02:29:38.040478 master-0 kubenswrapper[31559]: I0216 02:29:38.039754 31559 scope.go:117] "RemoveContainer" containerID="952b358416521f11e1d86b9917b4c4ff5b0730ac77eeb86ca5f67f6a183f7d45"
Feb 16 02:29:38.042709 master-0 kubenswrapper[31559]: I0216 02:29:38.042636 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" event={"ID":"83093487-16c4-44d2-a29f-bd113826e05a","Type":"ContainerStarted","Data":"65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015"}
Feb 16 02:29:38.083761 master-0 kubenswrapper[31559]: I0216 02:29:38.083589 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" podStartSLOduration=2.220726922 podStartE2EDuration="8.083560844s" podCreationTimestamp="2026-02-16 02:29:30 +0000 UTC" firstStartedPulling="2026-02-16 02:29:31.79569033 +0000 UTC m=+424.140296375" lastFinishedPulling="2026-02-16 02:29:37.658524242 +0000 UTC m=+430.003130297" observedRunningTime="2026-02-16 02:29:38.079483838 +0000 UTC m=+430.424089913" watchObservedRunningTime="2026-02-16 02:29:38.083560844 +0000 UTC m=+430.428166889"
Feb 16 02:29:38.348918 master-0 kubenswrapper[31559]: I0216 02:29:38.348733 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 02:29:38.709333 master-0 kubenswrapper[31559]: I0216 02:29:38.709230 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 02:29:40.843640 master-0 kubenswrapper[31559]: I0216 02:29:40.843525 31559 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-8nl7s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body=
Feb 16 02:29:40.844511 master-0 kubenswrapper[31559]: I0216 02:29:40.843639 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" podUID="bde83629-b39c-401e-bc30-5ce205638918" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused"
Feb 16 02:29:41.084550 master-0 kubenswrapper[31559]: I0216 02:29:41.084398 31559 generic.go:334] "Generic (PLEG): container finished" podID="bde83629-b39c-401e-bc30-5ce205638918" containerID="c294045cdc69f6c083a4cdeb23b9bbfe3d4c6dfa0c1d7960cd217705505d5fc6" exitCode=0
Feb 16 02:29:41.084550 master-0 kubenswrapper[31559]: I0216 02:29:41.084498 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" event={"ID":"bde83629-b39c-401e-bc30-5ce205638918","Type":"ContainerDied","Data":"c294045cdc69f6c083a4cdeb23b9bbfe3d4c6dfa0c1d7960cd217705505d5fc6"}
Feb 16 02:29:41.084922 master-0 kubenswrapper[31559]: I0216 02:29:41.084625 31559 scope.go:117] "RemoveContainer" containerID="878600941ff09ae766ef1ccc9a324f0c6d5cbe6f0b05660545fe5e976ad49b02"
Feb 16 02:29:41.085700 master-0 kubenswrapper[31559]: I0216 02:29:41.085636 31559 scope.go:117] "RemoveContainer" containerID="c294045cdc69f6c083a4cdeb23b9bbfe3d4c6dfa0c1d7960cd217705505d5fc6"
Feb 16 02:29:41.248202 master-0 kubenswrapper[31559]: I0216 02:29:41.248119 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"
Feb 16 02:29:41.248202 master-0 kubenswrapper[31559]: I0216 02:29:41.248194 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"
Feb 16 02:29:41.263628 master-0 kubenswrapper[31559]: I0216 02:29:41.263562 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"
Feb 16 02:29:42.101551 master-0 kubenswrapper[31559]: I0216 02:29:42.101460 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s" event={"ID":"bde83629-b39c-401e-bc30-5ce205638918","Type":"ContainerStarted","Data":"db9b4dfe7d549a7097fc45a02fb6f4996055535712eb0b308739c81c4f9f0363"}
Feb 16 02:29:42.102403 master-0 kubenswrapper[31559]: I0216 02:29:42.102372 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:29:42.106727 master-0 kubenswrapper[31559]: I0216 02:29:42.105351 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"
Feb 16 02:29:42.106727 master-0 kubenswrapper[31559]: I0216 02:29:42.105820 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-8nl7s"
Feb 16 02:29:44.020996 master-0 kubenswrapper[31559]: I0216 02:29:44.020878 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"]
Feb 16 02:29:44.022137 master-0 kubenswrapper[31559]: E0216 02:29:44.021311 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor"
Feb 16 02:29:44.022137 master-0 kubenswrapper[31559]: I0216 02:29:44.021335 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor"
Feb 16 02:29:44.022137 master-0 kubenswrapper[31559]: I0216 02:29:44.021653 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor"
Feb 16 02:29:44.023029 master-0 kubenswrapper[31559]: I0216 02:29:44.022968 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"
Feb 16 02:29:44.160903 master-0 kubenswrapper[31559]: I0216 02:29:44.160821 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2zdh\" (UniqueName: \"kubernetes.io/projected/f7fc256c-7537-4d5e-a390-a65b85ac90f9-kube-api-access-m2zdh\") pod \"nova-console-poller-5f8d6dc47-6ftb4\" (UID: \"f7fc256c-7537-4d5e-a390-a65b85ac90f9\") " pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"
Feb 16 02:29:44.161210 master-0 kubenswrapper[31559]: I0216 02:29:44.160946 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f7fc256c-7537-4d5e-a390-a65b85ac90f9-os-client-config\") pod \"nova-console-poller-5f8d6dc47-6ftb4\" (UID: \"f7fc256c-7537-4d5e-a390-a65b85ac90f9\") " pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"
Feb 16 02:29:44.190199 master-0 kubenswrapper[31559]: I0216 02:29:44.190098 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"]
Feb 16 02:29:44.262987 master-0 kubenswrapper[31559]: I0216 02:29:44.262892 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2zdh\" (UniqueName: \"kubernetes.io/projected/f7fc256c-7537-4d5e-a390-a65b85ac90f9-kube-api-access-m2zdh\") pod \"nova-console-poller-5f8d6dc47-6ftb4\" (UID: \"f7fc256c-7537-4d5e-a390-a65b85ac90f9\") " pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"
Feb 16 02:29:44.262987 master-0 kubenswrapper[31559]: I0216 02:29:44.262993 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f7fc256c-7537-4d5e-a390-a65b85ac90f9-os-client-config\") pod \"nova-console-poller-5f8d6dc47-6ftb4\" (UID: \"f7fc256c-7537-4d5e-a390-a65b85ac90f9\") " pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"
Feb 16 02:29:44.268400 master-0 kubenswrapper[31559]: I0216 02:29:44.268326 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f7fc256c-7537-4d5e-a390-a65b85ac90f9-os-client-config\") pod \"nova-console-poller-5f8d6dc47-6ftb4\" (UID: \"f7fc256c-7537-4d5e-a390-a65b85ac90f9\") " pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"
Feb 16 02:29:44.299108 master-0 kubenswrapper[31559]: I0216 02:29:44.298955 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2zdh\" (UniqueName: \"kubernetes.io/projected/f7fc256c-7537-4d5e-a390-a65b85ac90f9-kube-api-access-m2zdh\") pod \"nova-console-poller-5f8d6dc47-6ftb4\" (UID: \"f7fc256c-7537-4d5e-a390-a65b85ac90f9\") " pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"
Feb 16 02:29:44.347128 master-0 kubenswrapper[31559]: I0216 02:29:44.347051 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"
Feb 16 02:29:44.384609 master-0 kubenswrapper[31559]: I0216 02:29:44.384546 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 16 02:29:44.874280 master-0 kubenswrapper[31559]: I0216 02:29:44.874144 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4"]
Feb 16 02:29:44.879913 master-0 kubenswrapper[31559]: W0216 02:29:44.879845 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fc256c_7537_4d5e_a390_a65b85ac90f9.slice/crio-f127a009dbe94e7b13399eff4b7ff99f2db048e3f956e4bda808a6ba77a88163 WatchSource:0}: Error finding container f127a009dbe94e7b13399eff4b7ff99f2db048e3f956e4bda808a6ba77a88163: Status 404 returned error can't find the container with id f127a009dbe94e7b13399eff4b7ff99f2db048e3f956e4bda808a6ba77a88163
Feb 16 02:29:45.131165 master-0 kubenswrapper[31559]: I0216 02:29:45.130952 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4" event={"ID":"f7fc256c-7537-4d5e-a390-a65b85ac90f9","Type":"ContainerStarted","Data":"f127a009dbe94e7b13399eff4b7ff99f2db048e3f956e4bda808a6ba77a88163"}
Feb 16 02:29:45.188235 master-0 kubenswrapper[31559]: I0216 02:29:45.187855 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 02:29:47.694661 master-0 kubenswrapper[31559]: I0216 02:29:47.694480 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 16 02:29:51.197081 master-0 kubenswrapper[31559]: I0216 02:29:51.196986 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4" event={"ID":"f7fc256c-7537-4d5e-a390-a65b85ac90f9","Type":"ContainerStarted","Data":"8355a7151137edb2a92303ed6ed0bdbbe256555ceb3864f3f92691205bb2e7c3"}
Feb 16 02:29:52.166076 master-0 kubenswrapper[31559]: I0216 02:29:52.166007 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 02:29:52.210249 master-0 kubenswrapper[31559]: I0216 02:29:52.210143 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4" event={"ID":"f7fc256c-7537-4d5e-a390-a65b85ac90f9","Type":"ContainerStarted","Data":"b6f75efe413206260524db552c8db81975892981c08509eb40961ebbc08261d9"}
Feb 16 02:29:52.243536 master-0 kubenswrapper[31559]: I0216 02:29:52.243399 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-5f8d6dc47-6ftb4" podStartSLOduration=3.14701197 podStartE2EDuration="9.243373833s" podCreationTimestamp="2026-02-16 02:29:43 +0000 UTC" firstStartedPulling="2026-02-16 02:29:44.884196336 +0000 UTC m=+437.228802381" lastFinishedPulling="2026-02-16 02:29:50.980558199 +0000 UTC m=+443.325164244" observedRunningTime="2026-02-16 02:29:52.234626534 +0000 UTC m=+444.579232589" watchObservedRunningTime="2026-02-16 02:29:52.243373833 +0000 UTC m=+444.587979878"
Feb 16 02:29:52.813919 master-0 kubenswrapper[31559]: I0216 02:29:52.813845 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 02:29:52.888394 master-0 kubenswrapper[31559]: I0216 02:29:52.888312 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 16 02:29:53.800458 master-0 kubenswrapper[31559]: I0216 02:29:53.799784 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 02:29:54.351505 master-0 kubenswrapper[31559]: I0216 02:29:54.351395 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 16 02:29:55.937235 master-0 kubenswrapper[31559]: I0216 02:29:55.937141 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6ddf646cb9-nx4kk" podUID="f6000700-cc1a-4a76-9156-a466cc6e99ef" containerName="console" containerID="cri-o://fb249c68f5877e1ecec16d0e14920df2d5ec238ed03f598db9b30e3e603f66c5" gracePeriod=15
Feb 16 02:29:56.257055 master-0 kubenswrapper[31559]: I0216 02:29:56.256959 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6ddf646cb9-nx4kk_f6000700-cc1a-4a76-9156-a466cc6e99ef/console/0.log"
Feb 16 02:29:56.257344 master-0 kubenswrapper[31559]: I0216 02:29:56.257316 31559 generic.go:334] "Generic (PLEG): container finished" podID="f6000700-cc1a-4a76-9156-a466cc6e99ef" containerID="fb249c68f5877e1ecec16d0e14920df2d5ec238ed03f598db9b30e3e603f66c5" exitCode=2
Feb 16 02:29:56.257518 master-0 kubenswrapper[31559]: I0216 02:29:56.257452 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ddf646cb9-nx4kk" event={"ID":"f6000700-cc1a-4a76-9156-a466cc6e99ef","Type":"ContainerDied","Data":"fb249c68f5877e1ecec16d0e14920df2d5ec238ed03f598db9b30e3e603f66c5"}
Feb 16 02:29:56.417986 master-0 kubenswrapper[31559]: I0216 02:29:56.417934 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6ddf646cb9-nx4kk_f6000700-cc1a-4a76-9156-a466cc6e99ef/console/0.log"
Feb 16 02:29:56.418263 master-0 kubenswrapper[31559]: I0216 02:29:56.418003 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:29:56.606784 master-0 kubenswrapper[31559]: I0216 02:29:56.606632 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-oauth-serving-cert\") pod \"f6000700-cc1a-4a76-9156-a466cc6e99ef\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") "
Feb 16 02:29:56.606784 master-0 kubenswrapper[31559]: I0216 02:29:56.606715 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-serving-cert\") pod \"f6000700-cc1a-4a76-9156-a466cc6e99ef\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") "
Feb 16 02:29:56.607037 master-0 kubenswrapper[31559]: I0216 02:29:56.606804 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-trusted-ca-bundle\") pod \"f6000700-cc1a-4a76-9156-a466cc6e99ef\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") "
Feb 16 02:29:56.607037 master-0 kubenswrapper[31559]: I0216 02:29:56.606919 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-oauth-config\") pod \"f6000700-cc1a-4a76-9156-a466cc6e99ef\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") "
Feb 16 02:29:56.607037 master-0 kubenswrapper[31559]: I0216 02:29:56.606961 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-service-ca\") pod \"f6000700-cc1a-4a76-9156-a466cc6e99ef\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") "
Feb 16 02:29:56.607525 master-0 kubenswrapper[31559]: I0216 02:29:56.607476 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f6000700-cc1a-4a76-9156-a466cc6e99ef" (UID: "f6000700-cc1a-4a76-9156-a466cc6e99ef"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:29:56.607955 master-0 kubenswrapper[31559]: I0216 02:29:56.607900 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-service-ca" (OuterVolumeSpecName: "service-ca") pod "f6000700-cc1a-4a76-9156-a466cc6e99ef" (UID: "f6000700-cc1a-4a76-9156-a466cc6e99ef"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:29:56.608067 master-0 kubenswrapper[31559]: I0216 02:29:56.607920 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f6000700-cc1a-4a76-9156-a466cc6e99ef" (UID: "f6000700-cc1a-4a76-9156-a466cc6e99ef"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:29:56.608225 master-0 kubenswrapper[31559]: I0216 02:29:56.608184 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkxgb\" (UniqueName: \"kubernetes.io/projected/f6000700-cc1a-4a76-9156-a466cc6e99ef-kube-api-access-zkxgb\") pod \"f6000700-cc1a-4a76-9156-a466cc6e99ef\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") "
Feb 16 02:29:56.608293 master-0 kubenswrapper[31559]: I0216 02:29:56.608277 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-config\") pod \"f6000700-cc1a-4a76-9156-a466cc6e99ef\" (UID: \"f6000700-cc1a-4a76-9156-a466cc6e99ef\") "
Feb 16 02:29:56.609252 master-0 kubenswrapper[31559]: I0216 02:29:56.609198 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-config" (OuterVolumeSpecName: "console-config") pod "f6000700-cc1a-4a76-9156-a466cc6e99ef" (UID: "f6000700-cc1a-4a76-9156-a466cc6e99ef"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:29:56.609358 master-0 kubenswrapper[31559]: I0216 02:29:56.609218 31559 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 02:29:56.609358 master-0 kubenswrapper[31559]: I0216 02:29:56.609307 31559 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 02:29:56.609358 master-0 kubenswrapper[31559]: I0216 02:29:56.609338 31559 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:29:56.610236 master-0 kubenswrapper[31559]: I0216 02:29:56.610165 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f6000700-cc1a-4a76-9156-a466cc6e99ef" (UID: "f6000700-cc1a-4a76-9156-a466cc6e99ef"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:29:56.610936 master-0 kubenswrapper[31559]: I0216 02:29:56.610902 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f6000700-cc1a-4a76-9156-a466cc6e99ef" (UID: "f6000700-cc1a-4a76-9156-a466cc6e99ef"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:29:56.611780 master-0 kubenswrapper[31559]: I0216 02:29:56.611732 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6000700-cc1a-4a76-9156-a466cc6e99ef-kube-api-access-zkxgb" (OuterVolumeSpecName: "kube-api-access-zkxgb") pod "f6000700-cc1a-4a76-9156-a466cc6e99ef" (UID: "f6000700-cc1a-4a76-9156-a466cc6e99ef"). InnerVolumeSpecName "kube-api-access-zkxgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:29:56.711280 master-0 kubenswrapper[31559]: I0216 02:29:56.711179 31559 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:29:56.711280 master-0 kubenswrapper[31559]: I0216 02:29:56.711243 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkxgb\" (UniqueName: \"kubernetes.io/projected/f6000700-cc1a-4a76-9156-a466cc6e99ef-kube-api-access-zkxgb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:29:56.711280 master-0 kubenswrapper[31559]: I0216 02:29:56.711267 31559 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:29:56.711280 master-0 kubenswrapper[31559]: I0216 02:29:56.711287 31559 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6000700-cc1a-4a76-9156-a466cc6e99ef-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 02:29:57.270314 master-0 kubenswrapper[31559]: I0216 02:29:57.270250 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6ddf646cb9-nx4kk_f6000700-cc1a-4a76-9156-a466cc6e99ef/console/0.log"
Feb 16 02:29:57.271218 master-0 kubenswrapper[31559]: I0216 02:29:57.270348 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ddf646cb9-nx4kk" event={"ID":"f6000700-cc1a-4a76-9156-a466cc6e99ef","Type":"ContainerDied","Data":"b35706d38726aa11438a45ed268076d3052a4a7368e20e7d7b9ee2e5a3a08664"}
Feb 16 02:29:57.271218 master-0 kubenswrapper[31559]: I0216 02:29:57.270413 31559 scope.go:117] "RemoveContainer" containerID="fb249c68f5877e1ecec16d0e14920df2d5ec238ed03f598db9b30e3e603f66c5"
Feb 16 02:29:57.271218 master-0 kubenswrapper[31559]: I0216 02:29:57.270474 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6ddf646cb9-nx4kk"
Feb 16 02:29:57.363600 master-0 kubenswrapper[31559]: I0216 02:29:57.363501 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6ddf646cb9-nx4kk"]
Feb 16 02:29:57.374472 master-0 kubenswrapper[31559]: I0216 02:29:57.374333 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6ddf646cb9-nx4kk"]
Feb 16 02:29:57.944815 master-0 kubenswrapper[31559]: I0216 02:29:57.944641 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6000700-cc1a-4a76-9156-a466cc6e99ef" path="/var/lib/kubelet/pods/f6000700-cc1a-4a76-9156-a466cc6e99ef/volumes"
Feb 16 02:29:58.518871 master-0 kubenswrapper[31559]: I0216 02:29:58.518794 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 02:30:00.225569 master-0 kubenswrapper[31559]: I0216 02:30:00.225503 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"]
Feb 16 02:30:00.226176 master-0 kubenswrapper[31559]: E0216 02:30:00.225797 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6000700-cc1a-4a76-9156-a466cc6e99ef" containerName="console"
Feb 16 02:30:00.226176 master-0 kubenswrapper[31559]: I0216 02:30:00.225810 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6000700-cc1a-4a76-9156-a466cc6e99ef" containerName="console"
Feb 16 02:30:00.226176 master-0 kubenswrapper[31559]: I0216 02:30:00.225961 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6000700-cc1a-4a76-9156-a466cc6e99ef" containerName="console"
Feb 16 02:30:00.226380 master-0 kubenswrapper[31559]: I0216 02:30:00.226361 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.233211 master-0 kubenswrapper[31559]: I0216 02:30:00.233136 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 02:30:00.233856 master-0 kubenswrapper[31559]: I0216 02:30:00.233822 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-h8ldk"
Feb 16 02:30:00.255360 master-0 kubenswrapper[31559]: I0216 02:30:00.255261 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"]
Feb 16 02:30:00.301650 master-0 kubenswrapper[31559]: I0216 02:30:00.301592 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed896b20-c4dc-4b95-be82-5648300ac6c3-secret-volume\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.301914 master-0 kubenswrapper[31559]: I0216 02:30:00.301685 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed896b20-c4dc-4b95-be82-5648300ac6c3-config-volume\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.301914 master-0 kubenswrapper[31559]: I0216 02:30:00.301760 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frd8x\" (UniqueName: \"kubernetes.io/projected/ed896b20-c4dc-4b95-be82-5648300ac6c3-kube-api-access-frd8x\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.403613 master-0 kubenswrapper[31559]: I0216 02:30:00.403500 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed896b20-c4dc-4b95-be82-5648300ac6c3-config-volume\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.403942 master-0 kubenswrapper[31559]: I0216 02:30:00.403771 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frd8x\" (UniqueName: \"kubernetes.io/projected/ed896b20-c4dc-4b95-be82-5648300ac6c3-kube-api-access-frd8x\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.403942 master-0 kubenswrapper[31559]: I0216 02:30:00.403816 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed896b20-c4dc-4b95-be82-5648300ac6c3-secret-volume\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.405000 master-0 kubenswrapper[31559]: I0216 02:30:00.404934 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed896b20-c4dc-4b95-be82-5648300ac6c3-config-volume\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.409170 master-0 kubenswrapper[31559]: I0216 02:30:00.409128 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed896b20-c4dc-4b95-be82-5648300ac6c3-secret-volume\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.437139 master-0 kubenswrapper[31559]: I0216 02:30:00.437063 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frd8x\" (UniqueName: \"kubernetes.io/projected/ed896b20-c4dc-4b95-be82-5648300ac6c3-kube-api-access-frd8x\") pod \"collect-profiles-29520150-tfx2w\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:00.530520 master-0 kubenswrapper[31559]: I0216 02:30:00.530329 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 02:30:00.542685 master-0 kubenswrapper[31559]: I0216 02:30:00.542613 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:01.146699 master-0 kubenswrapper[31559]: I0216 02:30:01.146621 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"]
Feb 16 02:30:01.309320 master-0 kubenswrapper[31559]: I0216 02:30:01.309248 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w" event={"ID":"ed896b20-c4dc-4b95-be82-5648300ac6c3","Type":"ContainerStarted","Data":"348fcd0139c800b0e75ca0ef04dfaa0bdd9c5dbdba5f92839bc1e53cfac1ff1c"}
Feb 16 02:30:02.318715 master-0 kubenswrapper[31559]: I0216 02:30:02.318660 31559 generic.go:334] "Generic (PLEG): container finished" podID="ed896b20-c4dc-4b95-be82-5648300ac6c3" containerID="4bf332b4ca8908e5fa9080cb0ecdfdbec32f7856e88a772ecce77d71dd299f77" exitCode=0
Feb 16 02:30:02.318715 master-0 kubenswrapper[31559]: I0216 02:30:02.318726 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w" event={"ID":"ed896b20-c4dc-4b95-be82-5648300ac6c3","Type":"ContainerDied","Data":"4bf332b4ca8908e5fa9080cb0ecdfdbec32f7856e88a772ecce77d71dd299f77"}
Feb 16 02:30:03.343381 master-0 kubenswrapper[31559]: I0216 02:30:03.343296 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 02:30:03.824607 master-0 kubenswrapper[31559]: I0216 02:30:03.824539 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:03.969414 master-0 kubenswrapper[31559]: I0216 02:30:03.969347 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed896b20-c4dc-4b95-be82-5648300ac6c3-secret-volume\") pod \"ed896b20-c4dc-4b95-be82-5648300ac6c3\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") "
Feb 16 02:30:03.969733 master-0 kubenswrapper[31559]: I0216 02:30:03.969578 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frd8x\" (UniqueName: \"kubernetes.io/projected/ed896b20-c4dc-4b95-be82-5648300ac6c3-kube-api-access-frd8x\") pod \"ed896b20-c4dc-4b95-be82-5648300ac6c3\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") "
Feb 16 02:30:03.969733 master-0 kubenswrapper[31559]: I0216 02:30:03.969680 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed896b20-c4dc-4b95-be82-5648300ac6c3-config-volume\") pod \"ed896b20-c4dc-4b95-be82-5648300ac6c3\" (UID: \"ed896b20-c4dc-4b95-be82-5648300ac6c3\") "
Feb 16 02:30:03.970810 master-0 kubenswrapper[31559]: I0216 02:30:03.970759 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed896b20-c4dc-4b95-be82-5648300ac6c3-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed896b20-c4dc-4b95-be82-5648300ac6c3" (UID: "ed896b20-c4dc-4b95-be82-5648300ac6c3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:30:03.975074 master-0 kubenswrapper[31559]: I0216 02:30:03.974519 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed896b20-c4dc-4b95-be82-5648300ac6c3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed896b20-c4dc-4b95-be82-5648300ac6c3" (UID: "ed896b20-c4dc-4b95-be82-5648300ac6c3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:30:03.975221 master-0 kubenswrapper[31559]: I0216 02:30:03.975103 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed896b20-c4dc-4b95-be82-5648300ac6c3-kube-api-access-frd8x" (OuterVolumeSpecName: "kube-api-access-frd8x") pod "ed896b20-c4dc-4b95-be82-5648300ac6c3" (UID: "ed896b20-c4dc-4b95-be82-5648300ac6c3"). InnerVolumeSpecName "kube-api-access-frd8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:30:04.072760 master-0 kubenswrapper[31559]: I0216 02:30:04.072674 31559 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed896b20-c4dc-4b95-be82-5648300ac6c3-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 02:30:04.072760 master-0 kubenswrapper[31559]: I0216 02:30:04.072748 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frd8x\" (UniqueName: \"kubernetes.io/projected/ed896b20-c4dc-4b95-be82-5648300ac6c3-kube-api-access-frd8x\") on node \"master-0\" DevicePath \"\""
Feb 16 02:30:04.072760 master-0 kubenswrapper[31559]: I0216 02:30:04.072776 31559 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed896b20-c4dc-4b95-be82-5648300ac6c3-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 02:30:04.340395 master-0 kubenswrapper[31559]: I0216 02:30:04.340230 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w" event={"ID":"ed896b20-c4dc-4b95-be82-5648300ac6c3","Type":"ContainerDied","Data":"348fcd0139c800b0e75ca0ef04dfaa0bdd9c5dbdba5f92839bc1e53cfac1ff1c"}
Feb 16 02:30:04.340395 master-0 kubenswrapper[31559]: I0216 02:30:04.340320 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"
Feb 16 02:30:04.340680 master-0 kubenswrapper[31559]: I0216 02:30:04.340321 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="348fcd0139c800b0e75ca0ef04dfaa0bdd9c5dbdba5f92839bc1e53cfac1ff1c"
Feb 16 02:30:15.172674 master-0 kubenswrapper[31559]: I0216 02:30:15.172593 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-596bfc78ff-27s7m"]
Feb 16 02:30:15.173813 master-0 kubenswrapper[31559]: E0216 02:30:15.173111 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed896b20-c4dc-4b95-be82-5648300ac6c3" containerName="collect-profiles"
Feb 16 02:30:15.173813 master-0 kubenswrapper[31559]: I0216 02:30:15.173138 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed896b20-c4dc-4b95-be82-5648300ac6c3" containerName="collect-profiles"
Feb 16 02:30:15.173813 master-0 kubenswrapper[31559]: I0216 02:30:15.173481 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed896b20-c4dc-4b95-be82-5648300ac6c3" containerName="collect-profiles"
Feb 16 02:30:15.174352 master-0 kubenswrapper[31559]: I0216 02:30:15.174305 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-596bfc78ff-27s7m"
Feb 16 02:30:15.230576 master-0 kubenswrapper[31559]: I0216 02:30:15.228948 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-596bfc78ff-27s7m"]
Feb 16 02:30:15.275951 master-0 kubenswrapper[31559]: I0216 02:30:15.275862 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-service-ca\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m"
Feb 16 02:30:15.275951 master-0 kubenswrapper[31559]: I0216 02:30:15.275950 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-serving-cert\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m"
Feb 16 02:30:15.276242 master-0 kubenswrapper[31559]: I0216 02:30:15.276007 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-trusted-ca-bundle\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m"
Feb 16 02:30:15.276242 master-0 kubenswrapper[31559]: I0216 02:30:15.276036 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwr2j\" (UniqueName: \"kubernetes.io/projected/97ecc023-a5a8-48d9-a333-732b37168e6e-kube-api-access-fwr2j\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m"
Feb 16 02:30:15.276242 master-0
kubenswrapper[31559]: I0216 02:30:15.276060 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-oauth-serving-cert\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.276242 master-0 kubenswrapper[31559]: I0216 02:30:15.276082 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-oauth-config\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.276242 master-0 kubenswrapper[31559]: I0216 02:30:15.276141 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-console-config\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.377311 master-0 kubenswrapper[31559]: I0216 02:30:15.377230 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-console-config\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.377311 master-0 kubenswrapper[31559]: I0216 02:30:15.377282 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-service-ca\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " 
pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.377311 master-0 kubenswrapper[31559]: I0216 02:30:15.377326 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-serving-cert\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.377773 master-0 kubenswrapper[31559]: I0216 02:30:15.377562 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-trusted-ca-bundle\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.377773 master-0 kubenswrapper[31559]: I0216 02:30:15.377653 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwr2j\" (UniqueName: \"kubernetes.io/projected/97ecc023-a5a8-48d9-a333-732b37168e6e-kube-api-access-fwr2j\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.377773 master-0 kubenswrapper[31559]: I0216 02:30:15.377700 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-oauth-serving-cert\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.377773 master-0 kubenswrapper[31559]: I0216 02:30:15.377739 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-oauth-config\") pod 
\"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.378176 master-0 kubenswrapper[31559]: I0216 02:30:15.378130 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-service-ca\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.378712 master-0 kubenswrapper[31559]: I0216 02:30:15.378674 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-trusted-ca-bundle\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.378916 master-0 kubenswrapper[31559]: I0216 02:30:15.378866 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-console-config\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.379682 master-0 kubenswrapper[31559]: I0216 02:30:15.379591 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-oauth-serving-cert\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.382028 master-0 kubenswrapper[31559]: I0216 02:30:15.381926 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-oauth-config\") pod 
\"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.385380 master-0 kubenswrapper[31559]: I0216 02:30:15.384272 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-serving-cert\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.397520 master-0 kubenswrapper[31559]: I0216 02:30:15.396867 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwr2j\" (UniqueName: \"kubernetes.io/projected/97ecc023-a5a8-48d9-a333-732b37168e6e-kube-api-access-fwr2j\") pod \"console-596bfc78ff-27s7m\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:15.532261 master-0 kubenswrapper[31559]: I0216 02:30:15.532114 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:16.064751 master-0 kubenswrapper[31559]: I0216 02:30:16.064667 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-596bfc78ff-27s7m"] Feb 16 02:30:16.067785 master-0 kubenswrapper[31559]: W0216 02:30:16.067737 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ecc023_a5a8_48d9_a333_732b37168e6e.slice/crio-541f406008455daa206a2595b98ec52252459822ea2f311546034ffbfed6e440 WatchSource:0}: Error finding container 541f406008455daa206a2595b98ec52252459822ea2f311546034ffbfed6e440: Status 404 returned error can't find the container with id 541f406008455daa206a2595b98ec52252459822ea2f311546034ffbfed6e440 Feb 16 02:30:16.465400 master-0 kubenswrapper[31559]: I0216 02:30:16.465313 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-596bfc78ff-27s7m" event={"ID":"97ecc023-a5a8-48d9-a333-732b37168e6e","Type":"ContainerStarted","Data":"c4150c13da4f2c88b68d35fab7abb2bd849195b09f70a02c5361e2905dacf22f"} Feb 16 02:30:16.465400 master-0 kubenswrapper[31559]: I0216 02:30:16.465374 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-596bfc78ff-27s7m" event={"ID":"97ecc023-a5a8-48d9-a333-732b37168e6e","Type":"ContainerStarted","Data":"541f406008455daa206a2595b98ec52252459822ea2f311546034ffbfed6e440"} Feb 16 02:30:16.498037 master-0 kubenswrapper[31559]: I0216 02:30:16.496196 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-596bfc78ff-27s7m" podStartSLOduration=1.496173728 podStartE2EDuration="1.496173728s" podCreationTimestamp="2026-02-16 02:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:30:16.492995995 +0000 UTC m=+468.837602050" 
watchObservedRunningTime="2026-02-16 02:30:16.496173728 +0000 UTC m=+468.840779783" Feb 16 02:30:25.532425 master-0 kubenswrapper[31559]: I0216 02:30:25.532338 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:25.533230 master-0 kubenswrapper[31559]: I0216 02:30:25.532535 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:25.539971 master-0 kubenswrapper[31559]: I0216 02:30:25.539927 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:25.558221 master-0 kubenswrapper[31559]: I0216 02:30:25.558155 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:30:25.656223 master-0 kubenswrapper[31559]: I0216 02:30:25.656130 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cdbd48f5b-slwlb"] Feb 16 02:30:50.723355 master-0 kubenswrapper[31559]: I0216 02:30:50.723250 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7cdbd48f5b-slwlb" podUID="8859b956-70db-4e59-abff-faf38aa377fc" containerName="console" containerID="cri-o://85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c" gracePeriod=15 Feb 16 02:30:51.213176 master-0 kubenswrapper[31559]: I0216 02:30:51.213106 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cdbd48f5b-slwlb_8859b956-70db-4e59-abff-faf38aa377fc/console/0.log" Feb 16 02:30:51.213176 master-0 kubenswrapper[31559]: I0216 02:30:51.213181 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.252356 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-serving-cert\") pod \"8859b956-70db-4e59-abff-faf38aa377fc\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.252459 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-console-config\") pod \"8859b956-70db-4e59-abff-faf38aa377fc\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.252494 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-oauth-config\") pod \"8859b956-70db-4e59-abff-faf38aa377fc\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.252512 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l94js\" (UniqueName: \"kubernetes.io/projected/8859b956-70db-4e59-abff-faf38aa377fc-kube-api-access-l94js\") pod \"8859b956-70db-4e59-abff-faf38aa377fc\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.252538 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-service-ca\") pod \"8859b956-70db-4e59-abff-faf38aa377fc\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 
02:30:51.252564 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-oauth-serving-cert\") pod \"8859b956-70db-4e59-abff-faf38aa377fc\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.252585 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-trusted-ca-bundle\") pod \"8859b956-70db-4e59-abff-faf38aa377fc\" (UID: \"8859b956-70db-4e59-abff-faf38aa377fc\") " Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.253298 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8859b956-70db-4e59-abff-faf38aa377fc" (UID: "8859b956-70db-4e59-abff-faf38aa377fc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.253657 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8859b956-70db-4e59-abff-faf38aa377fc" (UID: "8859b956-70db-4e59-abff-faf38aa377fc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.253692 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-service-ca" (OuterVolumeSpecName: "service-ca") pod "8859b956-70db-4e59-abff-faf38aa377fc" (UID: "8859b956-70db-4e59-abff-faf38aa377fc"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.253875 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-console-config" (OuterVolumeSpecName: "console-config") pod "8859b956-70db-4e59-abff-faf38aa377fc" (UID: "8859b956-70db-4e59-abff-faf38aa377fc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.256633 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8859b956-70db-4e59-abff-faf38aa377fc" (UID: "8859b956-70db-4e59-abff-faf38aa377fc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:30:51.259459 master-0 kubenswrapper[31559]: I0216 02:30:51.256703 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8859b956-70db-4e59-abff-faf38aa377fc-kube-api-access-l94js" (OuterVolumeSpecName: "kube-api-access-l94js") pod "8859b956-70db-4e59-abff-faf38aa377fc" (UID: "8859b956-70db-4e59-abff-faf38aa377fc"). InnerVolumeSpecName "kube-api-access-l94js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:30:51.260196 master-0 kubenswrapper[31559]: I0216 02:30:51.259553 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8859b956-70db-4e59-abff-faf38aa377fc" (UID: "8859b956-70db-4e59-abff-faf38aa377fc"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:30:51.355029 master-0 kubenswrapper[31559]: I0216 02:30:51.354140 31559 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:30:51.355029 master-0 kubenswrapper[31559]: I0216 02:30:51.354208 31559 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:30:51.355029 master-0 kubenswrapper[31559]: I0216 02:30:51.354222 31559 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8859b956-70db-4e59-abff-faf38aa377fc-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:30:51.355029 master-0 kubenswrapper[31559]: I0216 02:30:51.354234 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l94js\" (UniqueName: \"kubernetes.io/projected/8859b956-70db-4e59-abff-faf38aa377fc-kube-api-access-l94js\") on node \"master-0\" DevicePath \"\"" Feb 16 02:30:51.355029 master-0 kubenswrapper[31559]: I0216 02:30:51.354247 31559 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:30:51.355029 master-0 kubenswrapper[31559]: I0216 02:30:51.354259 31559 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:30:51.355029 master-0 kubenswrapper[31559]: I0216 02:30:51.354270 31559 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8859b956-70db-4e59-abff-faf38aa377fc-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:30:51.795364 master-0 kubenswrapper[31559]: I0216 02:30:51.795290 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7cdbd48f5b-slwlb_8859b956-70db-4e59-abff-faf38aa377fc/console/0.log" Feb 16 02:30:51.796335 master-0 kubenswrapper[31559]: I0216 02:30:51.795378 31559 generic.go:334] "Generic (PLEG): container finished" podID="8859b956-70db-4e59-abff-faf38aa377fc" containerID="85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c" exitCode=2 Feb 16 02:30:51.796335 master-0 kubenswrapper[31559]: I0216 02:30:51.795426 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cdbd48f5b-slwlb" event={"ID":"8859b956-70db-4e59-abff-faf38aa377fc","Type":"ContainerDied","Data":"85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c"} Feb 16 02:30:51.796335 master-0 kubenswrapper[31559]: I0216 02:30:51.795506 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cdbd48f5b-slwlb" event={"ID":"8859b956-70db-4e59-abff-faf38aa377fc","Type":"ContainerDied","Data":"0aa2de3fd08df8b26f3eda9cf64141424e9a1212cd2d37b4121cf652b6c95d20"} Feb 16 02:30:51.796335 master-0 kubenswrapper[31559]: I0216 02:30:51.795537 31559 scope.go:117] "RemoveContainer" containerID="85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c" Feb 16 02:30:51.796335 master-0 kubenswrapper[31559]: I0216 02:30:51.795566 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cdbd48f5b-slwlb" Feb 16 02:30:51.823736 master-0 kubenswrapper[31559]: I0216 02:30:51.823684 31559 scope.go:117] "RemoveContainer" containerID="85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c" Feb 16 02:30:51.824680 master-0 kubenswrapper[31559]: E0216 02:30:51.824630 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c\": container with ID starting with 85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c not found: ID does not exist" containerID="85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c" Feb 16 02:30:51.824807 master-0 kubenswrapper[31559]: I0216 02:30:51.824676 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c"} err="failed to get container status \"85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c\": rpc error: code = NotFound desc = could not find container \"85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c\": container with ID starting with 85b995952273ff9ef848d9545d01c6d93093545daba5183940e46289d87fd69c not found: ID does not exist" Feb 16 02:30:51.849508 master-0 kubenswrapper[31559]: I0216 02:30:51.849420 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7cdbd48f5b-slwlb"] Feb 16 02:30:51.856148 master-0 kubenswrapper[31559]: I0216 02:30:51.856083 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7cdbd48f5b-slwlb"] Feb 16 02:30:51.934654 master-0 kubenswrapper[31559]: I0216 02:30:51.934562 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8859b956-70db-4e59-abff-faf38aa377fc" path="/var/lib/kubelet/pods/8859b956-70db-4e59-abff-faf38aa377fc/volumes" Feb 16 
02:31:23.373877 master-0 kubenswrapper[31559]: I0216 02:31:23.373759 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"] Feb 16 02:31:23.375265 master-0 kubenswrapper[31559]: E0216 02:31:23.375088 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8859b956-70db-4e59-abff-faf38aa377fc" containerName="console" Feb 16 02:31:23.375265 master-0 kubenswrapper[31559]: I0216 02:31:23.375119 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="8859b956-70db-4e59-abff-faf38aa377fc" containerName="console" Feb 16 02:31:23.375632 master-0 kubenswrapper[31559]: I0216 02:31:23.375589 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="8859b956-70db-4e59-abff-faf38aa377fc" containerName="console" Feb 16 02:31:23.388141 master-0 kubenswrapper[31559]: I0216 02:31:23.388054 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" Feb 16 02:31:23.390903 master-0 kubenswrapper[31559]: I0216 02:31:23.390815 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-w8j9g" Feb 16 02:31:23.407371 master-0 kubenswrapper[31559]: I0216 02:31:23.407291 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"] Feb 16 02:31:23.420789 master-0 kubenswrapper[31559]: I0216 02:31:23.420705 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" Feb 16 
02:31:23.421053 master-0 kubenswrapper[31559]: I0216 02:31:23.420936 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrc7f\" (UniqueName: \"kubernetes.io/projected/49a52261-d6f1-40c0-8ae5-b667feacf94d-kube-api-access-mrc7f\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" Feb 16 02:31:23.421053 master-0 kubenswrapper[31559]: I0216 02:31:23.421035 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" Feb 16 02:31:23.523228 master-0 kubenswrapper[31559]: I0216 02:31:23.523156 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrc7f\" (UniqueName: \"kubernetes.io/projected/49a52261-d6f1-40c0-8ae5-b667feacf94d-kube-api-access-mrc7f\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" Feb 16 02:31:23.523736 master-0 kubenswrapper[31559]: I0216 02:31:23.523687 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" Feb 16 02:31:23.524168 master-0 
kubenswrapper[31559]: I0216 02:31:23.524091 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"
Feb 16 02:31:23.524652 master-0 kubenswrapper[31559]: I0216 02:31:23.524574 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"
Feb 16 02:31:23.526307 master-0 kubenswrapper[31559]: I0216 02:31:23.526263 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"
Feb 16 02:31:23.555835 master-0 kubenswrapper[31559]: I0216 02:31:23.555729 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrc7f\" (UniqueName: \"kubernetes.io/projected/49a52261-d6f1-40c0-8ae5-b667feacf94d-kube-api-access-mrc7f\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"
Feb 16 02:31:23.708076 master-0 kubenswrapper[31559]: I0216 02:31:23.707972 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"
Feb 16 02:31:24.243681 master-0 kubenswrapper[31559]: I0216 02:31:24.243576 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"]
Feb 16 02:31:24.529203 master-0 kubenswrapper[31559]: I0216 02:31:24.529024 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" event={"ID":"49a52261-d6f1-40c0-8ae5-b667feacf94d","Type":"ContainerStarted","Data":"da34bcc6fd94cb40262550e5d2ddbaa5c589798a908d77f2460de1c4d7cbd458"}
Feb 16 02:31:24.530267 master-0 kubenswrapper[31559]: I0216 02:31:24.530221 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" event={"ID":"49a52261-d6f1-40c0-8ae5-b667feacf94d","Type":"ContainerStarted","Data":"855ae4bd7b7fe163244af53d4283f86757c7fbd478ce1dcd95aa5ac96e6a4b14"}
Feb 16 02:31:25.540225 master-0 kubenswrapper[31559]: I0216 02:31:25.540158 31559 generic.go:334] "Generic (PLEG): container finished" podID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerID="da34bcc6fd94cb40262550e5d2ddbaa5c589798a908d77f2460de1c4d7cbd458" exitCode=0
Feb 16 02:31:25.540718 master-0 kubenswrapper[31559]: I0216 02:31:25.540226 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" event={"ID":"49a52261-d6f1-40c0-8ae5-b667feacf94d","Type":"ContainerDied","Data":"da34bcc6fd94cb40262550e5d2ddbaa5c589798a908d77f2460de1c4d7cbd458"}
Feb 16 02:31:27.565278 master-0 kubenswrapper[31559]: I0216 02:31:27.565090 31559 generic.go:334] "Generic (PLEG): container finished" podID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerID="52d38e5fa0d73008915f248edadbaa20b73efe87d765700d02c64f6acf5ae932" exitCode=0
Feb 16 02:31:27.565278 master-0 kubenswrapper[31559]: I0216 02:31:27.565179 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" event={"ID":"49a52261-d6f1-40c0-8ae5-b667feacf94d","Type":"ContainerDied","Data":"52d38e5fa0d73008915f248edadbaa20b73efe87d765700d02c64f6acf5ae932"}
Feb 16 02:31:28.578432 master-0 kubenswrapper[31559]: I0216 02:31:28.578302 31559 generic.go:334] "Generic (PLEG): container finished" podID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerID="88c0fc496e499e626432060a9c52d03fe6860a625bec6578204dcea58b73bf3e" exitCode=0
Feb 16 02:31:28.578432 master-0 kubenswrapper[31559]: I0216 02:31:28.578388 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" event={"ID":"49a52261-d6f1-40c0-8ae5-b667feacf94d","Type":"ContainerDied","Data":"88c0fc496e499e626432060a9c52d03fe6860a625bec6578204dcea58b73bf3e"}
Feb 16 02:31:29.987957 master-0 kubenswrapper[31559]: I0216 02:31:29.987862 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"
Feb 16 02:31:30.064186 master-0 kubenswrapper[31559]: I0216 02:31:30.064057 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-util\") pod \"49a52261-d6f1-40c0-8ae5-b667feacf94d\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") "
Feb 16 02:31:30.064509 master-0 kubenswrapper[31559]: I0216 02:31:30.064369 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrc7f\" (UniqueName: \"kubernetes.io/projected/49a52261-d6f1-40c0-8ae5-b667feacf94d-kube-api-access-mrc7f\") pod \"49a52261-d6f1-40c0-8ae5-b667feacf94d\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") "
Feb 16 02:31:30.064509 master-0 kubenswrapper[31559]: I0216 02:31:30.064423 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-bundle\") pod \"49a52261-d6f1-40c0-8ae5-b667feacf94d\" (UID: \"49a52261-d6f1-40c0-8ae5-b667feacf94d\") "
Feb 16 02:31:30.066482 master-0 kubenswrapper[31559]: I0216 02:31:30.066383 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-bundle" (OuterVolumeSpecName: "bundle") pod "49a52261-d6f1-40c0-8ae5-b667feacf94d" (UID: "49a52261-d6f1-40c0-8ae5-b667feacf94d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:31:30.067757 master-0 kubenswrapper[31559]: I0216 02:31:30.067701 31559 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:31:30.068066 master-0 kubenswrapper[31559]: I0216 02:31:30.068003 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a52261-d6f1-40c0-8ae5-b667feacf94d-kube-api-access-mrc7f" (OuterVolumeSpecName: "kube-api-access-mrc7f") pod "49a52261-d6f1-40c0-8ae5-b667feacf94d" (UID: "49a52261-d6f1-40c0-8ae5-b667feacf94d"). InnerVolumeSpecName "kube-api-access-mrc7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:31:30.089040 master-0 kubenswrapper[31559]: I0216 02:31:30.088902 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-util" (OuterVolumeSpecName: "util") pod "49a52261-d6f1-40c0-8ae5-b667feacf94d" (UID: "49a52261-d6f1-40c0-8ae5-b667feacf94d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:31:30.169884 master-0 kubenswrapper[31559]: I0216 02:31:30.169789 31559 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49a52261-d6f1-40c0-8ae5-b667feacf94d-util\") on node \"master-0\" DevicePath \"\""
Feb 16 02:31:30.169884 master-0 kubenswrapper[31559]: I0216 02:31:30.169861 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrc7f\" (UniqueName: \"kubernetes.io/projected/49a52261-d6f1-40c0-8ae5-b667feacf94d-kube-api-access-mrc7f\") on node \"master-0\" DevicePath \"\""
Feb 16 02:31:30.601030 master-0 kubenswrapper[31559]: I0216 02:31:30.600777 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx" event={"ID":"49a52261-d6f1-40c0-8ae5-b667feacf94d","Type":"ContainerDied","Data":"855ae4bd7b7fe163244af53d4283f86757c7fbd478ce1dcd95aa5ac96e6a4b14"}
Feb 16 02:31:30.601030 master-0 kubenswrapper[31559]: I0216 02:31:30.600848 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="855ae4bd7b7fe163244af53d4283f86757c7fbd478ce1dcd95aa5ac96e6a4b14"
Feb 16 02:31:30.601030 master-0 kubenswrapper[31559]: I0216 02:31:30.600966 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4kc9dx"
Feb 16 02:31:36.470982 master-0 kubenswrapper[31559]: I0216 02:31:36.470904 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7798c7fd99-gksv8"]
Feb 16 02:31:36.471855 master-0 kubenswrapper[31559]: E0216 02:31:36.471231 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerName="util"
Feb 16 02:31:36.471855 master-0 kubenswrapper[31559]: I0216 02:31:36.471246 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerName="util"
Feb 16 02:31:36.471855 master-0 kubenswrapper[31559]: E0216 02:31:36.471257 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerName="extract"
Feb 16 02:31:36.471855 master-0 kubenswrapper[31559]: I0216 02:31:36.471263 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerName="extract"
Feb 16 02:31:36.471855 master-0 kubenswrapper[31559]: E0216 02:31:36.471292 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerName="pull"
Feb 16 02:31:36.471855 master-0 kubenswrapper[31559]: I0216 02:31:36.471298 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerName="pull"
Feb 16 02:31:36.471855 master-0 kubenswrapper[31559]: I0216 02:31:36.471450 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a52261-d6f1-40c0-8ae5-b667feacf94d" containerName="extract"
Feb 16 02:31:36.472169 master-0 kubenswrapper[31559]: I0216 02:31:36.472020 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.474078 master-0 kubenswrapper[31559]: I0216 02:31:36.474026 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Feb 16 02:31:36.474429 master-0 kubenswrapper[31559]: I0216 02:31:36.474385 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Feb 16 02:31:36.475184 master-0 kubenswrapper[31559]: I0216 02:31:36.475156 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Feb 16 02:31:36.475767 master-0 kubenswrapper[31559]: I0216 02:31:36.475486 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Feb 16 02:31:36.479462 master-0 kubenswrapper[31559]: I0216 02:31:36.476795 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Feb 16 02:31:36.481543 master-0 kubenswrapper[31559]: I0216 02:31:36.481490 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7798c7fd99-gksv8"]
Feb 16 02:31:36.603057 master-0 kubenswrapper[31559]: I0216 02:31:36.602980 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-metrics-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.603324 master-0 kubenswrapper[31559]: I0216 02:31:36.603136 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lktmt\" (UniqueName: \"kubernetes.io/projected/ef1e515a-2ae5-4dae-840c-40bc073d2408-kube-api-access-lktmt\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.603395 master-0 kubenswrapper[31559]: I0216 02:31:36.603326 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-webhook-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.603555 master-0 kubenswrapper[31559]: I0216 02:31:36.603514 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-apiservice-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.603756 master-0 kubenswrapper[31559]: I0216 02:31:36.603719 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef1e515a-2ae5-4dae-840c-40bc073d2408-socket-dir\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.705127 master-0 kubenswrapper[31559]: I0216 02:31:36.705062 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lktmt\" (UniqueName: \"kubernetes.io/projected/ef1e515a-2ae5-4dae-840c-40bc073d2408-kube-api-access-lktmt\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.705127 master-0 kubenswrapper[31559]: I0216 02:31:36.705139 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-webhook-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.705510 master-0 kubenswrapper[31559]: I0216 02:31:36.705181 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-apiservice-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.705584 master-0 kubenswrapper[31559]: I0216 02:31:36.705506 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef1e515a-2ae5-4dae-840c-40bc073d2408-socket-dir\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.705924 master-0 kubenswrapper[31559]: I0216 02:31:36.705872 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-metrics-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.707467 master-0 kubenswrapper[31559]: I0216 02:31:36.707347 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef1e515a-2ae5-4dae-840c-40bc073d2408-socket-dir\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.710425 master-0 kubenswrapper[31559]: I0216 02:31:36.710377 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-metrics-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.711283 master-0 kubenswrapper[31559]: I0216 02:31:36.711241 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-webhook-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.711475 master-0 kubenswrapper[31559]: I0216 02:31:36.711350 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef1e515a-2ae5-4dae-840c-40bc073d2408-apiservice-cert\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.734909 master-0 kubenswrapper[31559]: I0216 02:31:36.734763 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lktmt\" (UniqueName: \"kubernetes.io/projected/ef1e515a-2ae5-4dae-840c-40bc073d2408-kube-api-access-lktmt\") pod \"lvms-operator-7798c7fd99-gksv8\" (UID: \"ef1e515a-2ae5-4dae-840c-40bc073d2408\") " pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:36.793559 master-0 kubenswrapper[31559]: I0216 02:31:36.793462 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:37.335191 master-0 kubenswrapper[31559]: I0216 02:31:37.335071 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7798c7fd99-gksv8"]
Feb 16 02:31:37.679795 master-0 kubenswrapper[31559]: I0216 02:31:37.679703 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7798c7fd99-gksv8" event={"ID":"ef1e515a-2ae5-4dae-840c-40bc073d2408","Type":"ContainerStarted","Data":"7cb56ea0971a2eb74abb3f0c6bb33c94e23b400b9a4ea7cbbdd61665739914aa"}
Feb 16 02:31:42.727715 master-0 kubenswrapper[31559]: I0216 02:31:42.727620 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7798c7fd99-gksv8" event={"ID":"ef1e515a-2ae5-4dae-840c-40bc073d2408","Type":"ContainerStarted","Data":"4e4c6d73b1bd68eedaa644d1de93565a5431fd0e790aa18937b575d52651f782"}
Feb 16 02:31:42.728639 master-0 kubenswrapper[31559]: I0216 02:31:42.728038 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:42.762600 master-0 kubenswrapper[31559]: I0216 02:31:42.762401 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7798c7fd99-gksv8" podStartSLOduration=2.196985307 podStartE2EDuration="6.762376388s" podCreationTimestamp="2026-02-16 02:31:36 +0000 UTC" firstStartedPulling="2026-02-16 02:31:37.342021999 +0000 UTC m=+549.686628024" lastFinishedPulling="2026-02-16 02:31:41.90741305 +0000 UTC m=+554.252019105" observedRunningTime="2026-02-16 02:31:42.755360801 +0000 UTC m=+555.099966826" watchObservedRunningTime="2026-02-16 02:31:42.762376388 +0000 UTC m=+555.106982413"
Feb 16 02:31:43.742557 master-0 kubenswrapper[31559]: I0216 02:31:43.742480 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7798c7fd99-gksv8"
Feb 16 02:31:47.367887 master-0 kubenswrapper[31559]: I0216 02:31:47.367785 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"]
Feb 16 02:31:47.371083 master-0 kubenswrapper[31559]: I0216 02:31:47.371014 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.374858 master-0 kubenswrapper[31559]: I0216 02:31:47.374782 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-w8j9g"
Feb 16 02:31:47.394535 master-0 kubenswrapper[31559]: I0216 02:31:47.394407 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"]
Feb 16 02:31:47.417322 master-0 kubenswrapper[31559]: I0216 02:31:47.417079 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.417322 master-0 kubenswrapper[31559]: I0216 02:31:47.417294 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.417771 master-0 kubenswrapper[31559]: I0216 02:31:47.417371 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6t5k\" (UniqueName: \"kubernetes.io/projected/7cab06de-417c-4114-aae4-2cf82637db54-kube-api-access-x6t5k\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.519527 master-0 kubenswrapper[31559]: I0216 02:31:47.519392 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.519826 master-0 kubenswrapper[31559]: I0216 02:31:47.519642 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.519938 master-0 kubenswrapper[31559]: I0216 02:31:47.519852 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6t5k\" (UniqueName: \"kubernetes.io/projected/7cab06de-417c-4114-aae4-2cf82637db54-kube-api-access-x6t5k\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.520093 master-0 kubenswrapper[31559]: I0216 02:31:47.520036 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.520339 master-0 kubenswrapper[31559]: I0216 02:31:47.520279 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.552027 master-0 kubenswrapper[31559]: I0216 02:31:47.551931 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6t5k\" (UniqueName: \"kubernetes.io/projected/7cab06de-417c-4114-aae4-2cf82637db54-kube-api-access-x6t5k\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:47.709374 master-0 kubenswrapper[31559]: I0216 02:31:47.709280 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"
Feb 16 02:31:48.352529 master-0 kubenswrapper[31559]: I0216 02:31:48.341986 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7"]
Feb 16 02:31:48.352529 master-0 kubenswrapper[31559]: W0216 02:31:48.342230 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cab06de_417c_4114_aae4_2cf82637db54.slice/crio-26e6908381fe30d43457138db61e3dea67353d582a1d72f90fb632d177bcc626 WatchSource:0}: Error finding container 26e6908381fe30d43457138db61e3dea67353d582a1d72f90fb632d177bcc626: Status 404 returned error can't find the container with id 26e6908381fe30d43457138db61e3dea67353d582a1d72f90fb632d177bcc626
Feb 16 02:31:48.799273 master-0 kubenswrapper[31559]: I0216 02:31:48.799178 31559 generic.go:334] "Generic (PLEG): container finished" podID="7cab06de-417c-4114-aae4-2cf82637db54" containerID="dabc571df33f7fe8358bb49f01c2a7761e682bfc17203c8585dde450e7775fa3" exitCode=0
Feb 16 02:31:48.799273 master-0 kubenswrapper[31559]: I0216 02:31:48.799257 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7" event={"ID":"7cab06de-417c-4114-aae4-2cf82637db54","Type":"ContainerDied","Data":"dabc571df33f7fe8358bb49f01c2a7761e682bfc17203c8585dde450e7775fa3"}
Feb 16 02:31:48.800350 master-0 kubenswrapper[31559]: I0216 02:31:48.799334 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7" event={"ID":"7cab06de-417c-4114-aae4-2cf82637db54","Type":"ContainerStarted","Data":"26e6908381fe30d43457138db61e3dea67353d582a1d72f90fb632d177bcc626"}
Feb 16 02:31:49.967123 master-0 kubenswrapper[31559]: I0216 02:31:49.966995 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"]
Feb 16 02:31:49.969664 master-0 kubenswrapper[31559]: I0216 02:31:49.969615 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:49.976596 master-0 kubenswrapper[31559]: I0216 02:31:49.976520 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"]
Feb 16 02:31:50.106469 master-0 kubenswrapper[31559]: I0216 02:31:50.106363 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4z9h\" (UniqueName: \"kubernetes.io/projected/0bbe13e2-1e42-45af-945d-a1137fff5444-kube-api-access-v4z9h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.106469 master-0 kubenswrapper[31559]: I0216 02:31:50.106466 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.106900 master-0 kubenswrapper[31559]: I0216 02:31:50.106539 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.208786 master-0 kubenswrapper[31559]: I0216 02:31:50.208727 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4z9h\" (UniqueName: \"kubernetes.io/projected/0bbe13e2-1e42-45af-945d-a1137fff5444-kube-api-access-v4z9h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.208977 master-0 kubenswrapper[31559]: I0216 02:31:50.208801 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.208977 master-0 kubenswrapper[31559]: I0216 02:31:50.208869 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.209445 master-0 kubenswrapper[31559]: I0216 02:31:50.209390 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.209535 master-0 kubenswrapper[31559]: I0216 02:31:50.209504 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.231040 master-0 kubenswrapper[31559]: I0216 02:31:50.230968 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4z9h\" (UniqueName: \"kubernetes.io/projected/0bbe13e2-1e42-45af-945d-a1137fff5444-kube-api-access-v4z9h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.338814 master-0 kubenswrapper[31559]: I0216 02:31:50.337804 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"
Feb 16 02:31:50.346349 master-0 kubenswrapper[31559]: I0216 02:31:50.346300 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"]
Feb 16 02:31:50.349700 master-0 kubenswrapper[31559]: I0216 02:31:50.349493 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.360930 master-0 kubenswrapper[31559]: I0216 02:31:50.355535 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"]
Feb 16 02:31:50.513637 master-0 kubenswrapper[31559]: I0216 02:31:50.513594 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdvr\" (UniqueName: \"kubernetes.io/projected/20be453f-795d-41dc-b549-38115aa87b08-kube-api-access-8wdvr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.513840 master-0 kubenswrapper[31559]: I0216 02:31:50.513820 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.514031 master-0 kubenswrapper[31559]: I0216 02:31:50.514010 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.616823 master-0 kubenswrapper[31559]: I0216 02:31:50.616094 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.617148 master-0 kubenswrapper[31559]: I0216 02:31:50.617105 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wdvr\" (UniqueName: \"kubernetes.io/projected/20be453f-795d-41dc-b549-38115aa87b08-kube-api-access-8wdvr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.617240 master-0 kubenswrapper[31559]: I0216 02:31:50.617149 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.617318 master-0 kubenswrapper[31559]: I0216 02:31:50.617234 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.618023 master-0 kubenswrapper[31559]: I0216 02:31:50.617957 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.635824 master-0 kubenswrapper[31559]: I0216 02:31:50.635765 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wdvr\" (UniqueName: \"kubernetes.io/projected/20be453f-795d-41dc-b549-38115aa87b08-kube-api-access-8wdvr\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.716838 master-0 kubenswrapper[31559]: I0216 02:31:50.716746 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:50.809224 master-0 kubenswrapper[31559]: W0216 02:31:50.809157 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bbe13e2_1e42_45af_945d_a1137fff5444.slice/crio-85ce668e7ec09eb0937cb5db054b189323960fdc93bef90b483674999c1e7aea WatchSource:0}: Error finding container 85ce668e7ec09eb0937cb5db054b189323960fdc93bef90b483674999c1e7aea: Status 404 returned error can't find the container with id 85ce668e7ec09eb0937cb5db054b189323960fdc93bef90b483674999c1e7aea
Feb 16 02:31:50.818705 master-0 kubenswrapper[31559]: I0216 02:31:50.818650 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx"]
Feb 16 02:31:51.192303 master-0 kubenswrapper[31559]: I0216 02:31:51.191835 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"]
Feb 16 02:31:51.830823 master-0 kubenswrapper[31559]: I0216 02:31:51.830747 31559 generic.go:334] "Generic (PLEG): container finished" podID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerID="278e33367541793f23301033aad4a776dc165a9de1232dde8ca2ff9749cf4edd" exitCode=0
Feb 16 02:31:51.830823 master-0 kubenswrapper[31559]: I0216 02:31:51.830815 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx" event={"ID":"0bbe13e2-1e42-45af-945d-a1137fff5444","Type":"ContainerDied","Data":"278e33367541793f23301033aad4a776dc165a9de1232dde8ca2ff9749cf4edd"}
Feb 16 02:31:51.831189 master-0 kubenswrapper[31559]: I0216 02:31:51.830850 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx" event={"ID":"0bbe13e2-1e42-45af-945d-a1137fff5444","Type":"ContainerStarted","Data":"85ce668e7ec09eb0937cb5db054b189323960fdc93bef90b483674999c1e7aea"}
Feb 16 02:31:52.300141 master-0 kubenswrapper[31559]: W0216 02:31:52.300028 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20be453f_795d_41dc_b549_38115aa87b08.slice/crio-13f712033cee40762b4e4672bf4ae6187611346d6b082cec1fefab93e98b9d2a WatchSource:0}: Error finding container 13f712033cee40762b4e4672bf4ae6187611346d6b082cec1fefab93e98b9d2a: Status 404 returned error can't find the container with id 13f712033cee40762b4e4672bf4ae6187611346d6b082cec1fefab93e98b9d2a
Feb 16 02:31:52.843490 master-0 kubenswrapper[31559]: I0216 02:31:52.843234 31559 generic.go:334] "Generic (PLEG): container finished" podID="7cab06de-417c-4114-aae4-2cf82637db54" containerID="ab4411255467228bd142fcafcdf5d79d4be45ca7c9c3b31bcb20c6a8898ca614" exitCode=0
Feb 16 02:31:52.843490 master-0 kubenswrapper[31559]: I0216 02:31:52.843358 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7" event={"ID":"7cab06de-417c-4114-aae4-2cf82637db54","Type":"ContainerDied","Data":"ab4411255467228bd142fcafcdf5d79d4be45ca7c9c3b31bcb20c6a8898ca614"} Feb 16 02:31:52.845727 master-0 kubenswrapper[31559]: I0216 02:31:52.845673 31559 generic.go:334] "Generic (PLEG): container finished" podID="20be453f-795d-41dc-b549-38115aa87b08" containerID="dd7c404eb8a2ba3cda0235612b5fdd1066fb9b6fc6a5ced262a07af848f45b17" exitCode=0 Feb 16 02:31:52.845821 master-0 kubenswrapper[31559]: I0216 02:31:52.845728 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv" event={"ID":"20be453f-795d-41dc-b549-38115aa87b08","Type":"ContainerDied","Data":"dd7c404eb8a2ba3cda0235612b5fdd1066fb9b6fc6a5ced262a07af848f45b17"} Feb 16 02:31:52.845821 master-0 kubenswrapper[31559]: I0216 02:31:52.845761 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv" event={"ID":"20be453f-795d-41dc-b549-38115aa87b08","Type":"ContainerStarted","Data":"13f712033cee40762b4e4672bf4ae6187611346d6b082cec1fefab93e98b9d2a"} Feb 16 02:31:53.865358 master-0 kubenswrapper[31559]: I0216 02:31:53.864781 31559 generic.go:334] "Generic (PLEG): container finished" podID="7cab06de-417c-4114-aae4-2cf82637db54" containerID="7d9737586eb0790191cf5b21b3d43c86ceb69881ebbfb804816a9ca7c7a0e8a5" exitCode=0 Feb 16 02:31:53.865358 master-0 kubenswrapper[31559]: I0216 02:31:53.864865 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7" event={"ID":"7cab06de-417c-4114-aae4-2cf82637db54","Type":"ContainerDied","Data":"7d9737586eb0790191cf5b21b3d43c86ceb69881ebbfb804816a9ca7c7a0e8a5"} Feb 16 02:31:54.247639 master-0 kubenswrapper[31559]: I0216 02:31:54.247589 31559 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr"] Feb 16 02:31:54.251005 master-0 kubenswrapper[31559]: I0216 02:31:54.250965 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.265699 master-0 kubenswrapper[31559]: I0216 02:31:54.265638 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr"] Feb 16 02:31:54.389203 master-0 kubenswrapper[31559]: I0216 02:31:54.389127 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.389203 master-0 kubenswrapper[31559]: I0216 02:31:54.389200 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.389570 master-0 kubenswrapper[31559]: I0216 02:31:54.389496 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddn5\" (UniqueName: \"kubernetes.io/projected/c16b56c4-40cf-4fb2-ae54-c32f1df90335-kube-api-access-fddn5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: 
\"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.490917 master-0 kubenswrapper[31559]: I0216 02:31:54.490846 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fddn5\" (UniqueName: \"kubernetes.io/projected/c16b56c4-40cf-4fb2-ae54-c32f1df90335-kube-api-access-fddn5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.491102 master-0 kubenswrapper[31559]: I0216 02:31:54.491025 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.491102 master-0 kubenswrapper[31559]: I0216 02:31:54.491067 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.492787 master-0 kubenswrapper[31559]: I0216 02:31:54.492742 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.493061 master-0 kubenswrapper[31559]: I0216 02:31:54.492992 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.511063 master-0 kubenswrapper[31559]: I0216 02:31:54.510975 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fddn5\" (UniqueName: \"kubernetes.io/projected/c16b56c4-40cf-4fb2-ae54-c32f1df90335-kube-api-access-fddn5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.573662 master-0 kubenswrapper[31559]: I0216 02:31:54.573071 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" Feb 16 02:31:54.882159 master-0 kubenswrapper[31559]: I0216 02:31:54.881963 31559 generic.go:334] "Generic (PLEG): container finished" podID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerID="c38503c267321fa2106ce3b14aa2f343330c8e5d3a53d59d07aef92a5c268dbe" exitCode=0 Feb 16 02:31:54.883346 master-0 kubenswrapper[31559]: I0216 02:31:54.882994 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx" event={"ID":"0bbe13e2-1e42-45af-945d-a1137fff5444","Type":"ContainerDied","Data":"c38503c267321fa2106ce3b14aa2f343330c8e5d3a53d59d07aef92a5c268dbe"} Feb 16 02:31:54.892771 master-0 kubenswrapper[31559]: I0216 02:31:54.892273 31559 generic.go:334] "Generic (PLEG): container finished" podID="20be453f-795d-41dc-b549-38115aa87b08" containerID="97bf6026d98f7f50177d4722ee706d8b773032f4992dd4177ea5ff1d9d45818b" exitCode=0 Feb 16 02:31:54.892771 master-0 kubenswrapper[31559]: I0216 02:31:54.892430 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv" event={"ID":"20be453f-795d-41dc-b549-38115aa87b08","Type":"ContainerDied","Data":"97bf6026d98f7f50177d4722ee706d8b773032f4992dd4177ea5ff1d9d45818b"} Feb 16 02:31:55.067167 master-0 kubenswrapper[31559]: I0216 02:31:55.066229 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr"] Feb 16 02:31:55.275239 master-0 kubenswrapper[31559]: I0216 02:31:55.275187 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7" Feb 16 02:31:55.407342 master-0 kubenswrapper[31559]: I0216 02:31:55.407280 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-bundle\") pod \"7cab06de-417c-4114-aae4-2cf82637db54\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " Feb 16 02:31:55.407342 master-0 kubenswrapper[31559]: I0216 02:31:55.407346 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-util\") pod \"7cab06de-417c-4114-aae4-2cf82637db54\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " Feb 16 02:31:55.407709 master-0 kubenswrapper[31559]: I0216 02:31:55.407401 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6t5k\" (UniqueName: \"kubernetes.io/projected/7cab06de-417c-4114-aae4-2cf82637db54-kube-api-access-x6t5k\") pod \"7cab06de-417c-4114-aae4-2cf82637db54\" (UID: \"7cab06de-417c-4114-aae4-2cf82637db54\") " Feb 16 02:31:55.408550 master-0 kubenswrapper[31559]: I0216 02:31:55.408460 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-bundle" (OuterVolumeSpecName: "bundle") pod "7cab06de-417c-4114-aae4-2cf82637db54" (UID: "7cab06de-417c-4114-aae4-2cf82637db54"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:31:55.411575 master-0 kubenswrapper[31559]: I0216 02:31:55.411327 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cab06de-417c-4114-aae4-2cf82637db54-kube-api-access-x6t5k" (OuterVolumeSpecName: "kube-api-access-x6t5k") pod "7cab06de-417c-4114-aae4-2cf82637db54" (UID: "7cab06de-417c-4114-aae4-2cf82637db54"). InnerVolumeSpecName "kube-api-access-x6t5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:31:55.423082 master-0 kubenswrapper[31559]: I0216 02:31:55.423016 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-util" (OuterVolumeSpecName: "util") pod "7cab06de-417c-4114-aae4-2cf82637db54" (UID: "7cab06de-417c-4114-aae4-2cf82637db54"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:31:55.487773 master-0 kubenswrapper[31559]: E0216 02:31:55.487689 31559 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20be453f_795d_41dc_b549_38115aa87b08.slice/crio-ba906c9ca265d4454122a0524e6c85fa76de21e337207a522d9414037e0d61f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20be453f_795d_41dc_b549_38115aa87b08.slice/crio-conmon-ba906c9ca265d4454122a0524e6c85fa76de21e337207a522d9414037e0d61f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bbe13e2_1e42_45af_945d_a1137fff5444.slice/crio-conmon-05f9735112edadb0f9f91ebd878ae7b7d603a246e521607cc740b9ebee70c382.scope\": RecentStats: unable to find data in memory cache]" Feb 16 02:31:55.509012 master-0 kubenswrapper[31559]: I0216 02:31:55.508957 31559 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x6t5k\" (UniqueName: \"kubernetes.io/projected/7cab06de-417c-4114-aae4-2cf82637db54-kube-api-access-x6t5k\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:55.509012 master-0 kubenswrapper[31559]: I0216 02:31:55.509000 31559 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:55.509012 master-0 kubenswrapper[31559]: I0216 02:31:55.509013 31559 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cab06de-417c-4114-aae4-2cf82637db54-util\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:55.906691 master-0 kubenswrapper[31559]: I0216 02:31:55.906419 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7" event={"ID":"7cab06de-417c-4114-aae4-2cf82637db54","Type":"ContainerDied","Data":"26e6908381fe30d43457138db61e3dea67353d582a1d72f90fb632d177bcc626"} Feb 16 02:31:55.906691 master-0 kubenswrapper[31559]: I0216 02:31:55.906532 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gkcr7" Feb 16 02:31:55.906691 master-0 kubenswrapper[31559]: I0216 02:31:55.906539 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e6908381fe30d43457138db61e3dea67353d582a1d72f90fb632d177bcc626" Feb 16 02:31:55.909313 master-0 kubenswrapper[31559]: I0216 02:31:55.909249 31559 generic.go:334] "Generic (PLEG): container finished" podID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerID="05f9735112edadb0f9f91ebd878ae7b7d603a246e521607cc740b9ebee70c382" exitCode=0 Feb 16 02:31:55.909460 master-0 kubenswrapper[31559]: I0216 02:31:55.909365 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx" event={"ID":"0bbe13e2-1e42-45af-945d-a1137fff5444","Type":"ContainerDied","Data":"05f9735112edadb0f9f91ebd878ae7b7d603a246e521607cc740b9ebee70c382"} Feb 16 02:31:55.912131 master-0 kubenswrapper[31559]: I0216 02:31:55.912057 31559 generic.go:334] "Generic (PLEG): container finished" podID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerID="f9d0be9ba41ad6b645f20ecd94a987cde63154295bb30403cf6582799ac4414c" exitCode=0 Feb 16 02:31:55.912280 master-0 kubenswrapper[31559]: I0216 02:31:55.912168 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" event={"ID":"c16b56c4-40cf-4fb2-ae54-c32f1df90335","Type":"ContainerDied","Data":"f9d0be9ba41ad6b645f20ecd94a987cde63154295bb30403cf6582799ac4414c"} Feb 16 02:31:55.912280 master-0 kubenswrapper[31559]: I0216 02:31:55.912224 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" 
event={"ID":"c16b56c4-40cf-4fb2-ae54-c32f1df90335","Type":"ContainerStarted","Data":"7b7d800bb96e7785fc9a9f57fba98575c909ffebd29b84a1b7cf30e0a8203b1c"} Feb 16 02:31:55.916627 master-0 kubenswrapper[31559]: I0216 02:31:55.916550 31559 generic.go:334] "Generic (PLEG): container finished" podID="20be453f-795d-41dc-b549-38115aa87b08" containerID="ba906c9ca265d4454122a0524e6c85fa76de21e337207a522d9414037e0d61f7" exitCode=0 Feb 16 02:31:55.916627 master-0 kubenswrapper[31559]: I0216 02:31:55.916602 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv" event={"ID":"20be453f-795d-41dc-b549-38115aa87b08","Type":"ContainerDied","Data":"ba906c9ca265d4454122a0524e6c85fa76de21e337207a522d9414037e0d61f7"} Feb 16 02:31:57.463525 master-0 kubenswrapper[31559]: I0216 02:31:57.463483 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx" Feb 16 02:31:57.467197 master-0 kubenswrapper[31559]: I0216 02:31:57.467170 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv" Feb 16 02:31:57.553299 master-0 kubenswrapper[31559]: I0216 02:31:57.551528 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wdvr\" (UniqueName: \"kubernetes.io/projected/20be453f-795d-41dc-b549-38115aa87b08-kube-api-access-8wdvr\") pod \"20be453f-795d-41dc-b549-38115aa87b08\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " Feb 16 02:31:57.553299 master-0 kubenswrapper[31559]: I0216 02:31:57.551722 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-util\") pod \"20be453f-795d-41dc-b549-38115aa87b08\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " Feb 16 02:31:57.553299 master-0 kubenswrapper[31559]: I0216 02:31:57.551863 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-bundle\") pod \"20be453f-795d-41dc-b549-38115aa87b08\" (UID: \"20be453f-795d-41dc-b549-38115aa87b08\") " Feb 16 02:31:57.553299 master-0 kubenswrapper[31559]: I0216 02:31:57.551948 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4z9h\" (UniqueName: \"kubernetes.io/projected/0bbe13e2-1e42-45af-945d-a1137fff5444-kube-api-access-v4z9h\") pod \"0bbe13e2-1e42-45af-945d-a1137fff5444\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " Feb 16 02:31:57.553299 master-0 kubenswrapper[31559]: I0216 02:31:57.552225 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-bundle\") pod \"0bbe13e2-1e42-45af-945d-a1137fff5444\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " Feb 16 02:31:57.553299 master-0 kubenswrapper[31559]: I0216 
02:31:57.552270 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-util\") pod \"0bbe13e2-1e42-45af-945d-a1137fff5444\" (UID: \"0bbe13e2-1e42-45af-945d-a1137fff5444\") " Feb 16 02:31:57.553299 master-0 kubenswrapper[31559]: I0216 02:31:57.552675 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-bundle" (OuterVolumeSpecName: "bundle") pod "20be453f-795d-41dc-b549-38115aa87b08" (UID: "20be453f-795d-41dc-b549-38115aa87b08"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:31:57.553299 master-0 kubenswrapper[31559]: I0216 02:31:57.552885 31559 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:57.554513 master-0 kubenswrapper[31559]: I0216 02:31:57.554219 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-bundle" (OuterVolumeSpecName: "bundle") pod "0bbe13e2-1e42-45af-945d-a1137fff5444" (UID: "0bbe13e2-1e42-45af-945d-a1137fff5444"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:31:57.567660 master-0 kubenswrapper[31559]: I0216 02:31:57.563587 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20be453f-795d-41dc-b549-38115aa87b08-kube-api-access-8wdvr" (OuterVolumeSpecName: "kube-api-access-8wdvr") pod "20be453f-795d-41dc-b549-38115aa87b08" (UID: "20be453f-795d-41dc-b549-38115aa87b08"). InnerVolumeSpecName "kube-api-access-8wdvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:31:57.567660 master-0 kubenswrapper[31559]: I0216 02:31:57.565119 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbe13e2-1e42-45af-945d-a1137fff5444-kube-api-access-v4z9h" (OuterVolumeSpecName: "kube-api-access-v4z9h") pod "0bbe13e2-1e42-45af-945d-a1137fff5444" (UID: "0bbe13e2-1e42-45af-945d-a1137fff5444"). InnerVolumeSpecName "kube-api-access-v4z9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:31:57.578219 master-0 kubenswrapper[31559]: I0216 02:31:57.577319 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-util" (OuterVolumeSpecName: "util") pod "20be453f-795d-41dc-b549-38115aa87b08" (UID: "20be453f-795d-41dc-b549-38115aa87b08"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:31:57.588070 master-0 kubenswrapper[31559]: I0216 02:31:57.587887 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-util" (OuterVolumeSpecName: "util") pod "0bbe13e2-1e42-45af-945d-a1137fff5444" (UID: "0bbe13e2-1e42-45af-945d-a1137fff5444"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:31:57.654683 master-0 kubenswrapper[31559]: I0216 02:31:57.654615 31559 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:57.654683 master-0 kubenswrapper[31559]: I0216 02:31:57.654662 31559 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bbe13e2-1e42-45af-945d-a1137fff5444-util\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:57.654683 master-0 kubenswrapper[31559]: I0216 02:31:57.654675 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wdvr\" (UniqueName: \"kubernetes.io/projected/20be453f-795d-41dc-b549-38115aa87b08-kube-api-access-8wdvr\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:57.654683 master-0 kubenswrapper[31559]: I0216 02:31:57.654685 31559 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20be453f-795d-41dc-b549-38115aa87b08-util\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:57.654683 master-0 kubenswrapper[31559]: I0216 02:31:57.654694 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4z9h\" (UniqueName: \"kubernetes.io/projected/0bbe13e2-1e42-45af-945d-a1137fff5444-kube-api-access-v4z9h\") on node \"master-0\" DevicePath \"\"" Feb 16 02:31:57.940246 master-0 kubenswrapper[31559]: I0216 02:31:57.940191 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx" Feb 16 02:31:57.940823 master-0 kubenswrapper[31559]: I0216 02:31:57.940615 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vcvwx" event={"ID":"0bbe13e2-1e42-45af-945d-a1137fff5444","Type":"ContainerDied","Data":"85ce668e7ec09eb0937cb5db054b189323960fdc93bef90b483674999c1e7aea"} Feb 16 02:31:57.940823 master-0 kubenswrapper[31559]: I0216 02:31:57.940668 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ce668e7ec09eb0937cb5db054b189323960fdc93bef90b483674999c1e7aea" Feb 16 02:31:57.944071 master-0 kubenswrapper[31559]: I0216 02:31:57.944018 31559 generic.go:334] "Generic (PLEG): container finished" podID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerID="936b47e91c35628627acb0023c5c0ec30f8e520a89aa0c86be2d25f4e0bd38bb" exitCode=0 Feb 16 02:31:57.944142 master-0 kubenswrapper[31559]: I0216 02:31:57.944080 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" event={"ID":"c16b56c4-40cf-4fb2-ae54-c32f1df90335","Type":"ContainerDied","Data":"936b47e91c35628627acb0023c5c0ec30f8e520a89aa0c86be2d25f4e0bd38bb"} Feb 16 02:31:57.949408 master-0 kubenswrapper[31559]: I0216 02:31:57.949300 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv" event={"ID":"20be453f-795d-41dc-b549-38115aa87b08","Type":"ContainerDied","Data":"13f712033cee40762b4e4672bf4ae6187611346d6b082cec1fefab93e98b9d2a"} Feb 16 02:31:57.949408 master-0 kubenswrapper[31559]: I0216 02:31:57.949335 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f712033cee40762b4e4672bf4ae6187611346d6b082cec1fefab93e98b9d2a" Feb 16 02:31:57.949408 
master-0 kubenswrapper[31559]: I0216 02:31:57.949390 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarlzpv"
Feb 16 02:31:58.960340 master-0 kubenswrapper[31559]: I0216 02:31:58.960244 31559 generic.go:334] "Generic (PLEG): container finished" podID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerID="aadfca8bab767a954bab2cfab5b6af1d6e7a9c0b3242a2a124535d317483302c" exitCode=0
Feb 16 02:31:58.960340 master-0 kubenswrapper[31559]: I0216 02:31:58.960306 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" event={"ID":"c16b56c4-40cf-4fb2-ae54-c32f1df90335","Type":"ContainerDied","Data":"aadfca8bab767a954bab2cfab5b6af1d6e7a9c0b3242a2a124535d317483302c"}
Feb 16 02:32:00.362235 master-0 kubenswrapper[31559]: I0216 02:32:00.362147 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr"
Feb 16 02:32:00.499339 master-0 kubenswrapper[31559]: I0216 02:32:00.499049 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fddn5\" (UniqueName: \"kubernetes.io/projected/c16b56c4-40cf-4fb2-ae54-c32f1df90335-kube-api-access-fddn5\") pod \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") "
Feb 16 02:32:00.499339 master-0 kubenswrapper[31559]: I0216 02:32:00.499214 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-bundle\") pod \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") "
Feb 16 02:32:00.501812 master-0 kubenswrapper[31559]: I0216 02:32:00.501750 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16b56c4-40cf-4fb2-ae54-c32f1df90335-kube-api-access-fddn5" (OuterVolumeSpecName: "kube-api-access-fddn5") pod "c16b56c4-40cf-4fb2-ae54-c32f1df90335" (UID: "c16b56c4-40cf-4fb2-ae54-c32f1df90335"). InnerVolumeSpecName "kube-api-access-fddn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:32:00.502941 master-0 kubenswrapper[31559]: I0216 02:32:00.502765 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-bundle" (OuterVolumeSpecName: "bundle") pod "c16b56c4-40cf-4fb2-ae54-c32f1df90335" (UID: "c16b56c4-40cf-4fb2-ae54-c32f1df90335"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:32:00.503145 master-0 kubenswrapper[31559]: I0216 02:32:00.503088 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-util\") pod \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\" (UID: \"c16b56c4-40cf-4fb2-ae54-c32f1df90335\") "
Feb 16 02:32:00.503876 master-0 kubenswrapper[31559]: I0216 02:32:00.503832 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fddn5\" (UniqueName: \"kubernetes.io/projected/c16b56c4-40cf-4fb2-ae54-c32f1df90335-kube-api-access-fddn5\") on node \"master-0\" DevicePath \"\""
Feb 16 02:32:00.503876 master-0 kubenswrapper[31559]: I0216 02:32:00.503875 31559 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:32:00.516356 master-0 kubenswrapper[31559]: I0216 02:32:00.516276 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"]
Feb 16 02:32:00.516914 master-0 kubenswrapper[31559]: E0216 02:32:00.516861 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20be453f-795d-41dc-b549-38115aa87b08" containerName="pull"
Feb 16 02:32:00.516914 master-0 kubenswrapper[31559]: I0216 02:32:00.516901 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="20be453f-795d-41dc-b549-38115aa87b08" containerName="pull"
Feb 16 02:32:00.517075 master-0 kubenswrapper[31559]: E0216 02:32:00.516952 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerName="extract"
Feb 16 02:32:00.517075 master-0 kubenswrapper[31559]: I0216 02:32:00.516967 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerName="extract"
Feb 16 02:32:00.517075 master-0 kubenswrapper[31559]: E0216 02:32:00.517071 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerName="pull"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: I0216 02:32:00.517091 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerName="pull"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: E0216 02:32:00.517125 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20be453f-795d-41dc-b549-38115aa87b08" containerName="extract"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: I0216 02:32:00.517139 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="20be453f-795d-41dc-b549-38115aa87b08" containerName="extract"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: E0216 02:32:00.517158 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cab06de-417c-4114-aae4-2cf82637db54" containerName="util"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: I0216 02:32:00.517166 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cab06de-417c-4114-aae4-2cf82637db54" containerName="util"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: E0216 02:32:00.517180 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerName="util"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: I0216 02:32:00.517216 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerName="util"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: E0216 02:32:00.517235 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerName="pull"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: I0216 02:32:00.517244 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerName="pull"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: E0216 02:32:00.517267 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cab06de-417c-4114-aae4-2cf82637db54" containerName="pull"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: I0216 02:32:00.517275 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cab06de-417c-4114-aae4-2cf82637db54" containerName="pull"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: E0216 02:32:00.517287 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20be453f-795d-41dc-b549-38115aa87b08" containerName="util"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: I0216 02:32:00.517296 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="20be453f-795d-41dc-b549-38115aa87b08" containerName="util"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: E0216 02:32:00.517313 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerName="extract"
Feb 16 02:32:00.517294 master-0 kubenswrapper[31559]: I0216 02:32:00.517324 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerName="extract"
Feb 16 02:32:00.518329 master-0 kubenswrapper[31559]: E0216 02:32:00.517339 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cab06de-417c-4114-aae4-2cf82637db54" containerName="extract"
Feb 16 02:32:00.518329 master-0 kubenswrapper[31559]: I0216 02:32:00.517349 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cab06de-417c-4114-aae4-2cf82637db54" containerName="extract"
Feb 16 02:32:00.518329 master-0 kubenswrapper[31559]: E0216 02:32:00.517362 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerName="util"
Feb 16 02:32:00.518329 master-0 kubenswrapper[31559]: I0216 02:32:00.517371 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerName="util"
Feb 16 02:32:00.518329 master-0 kubenswrapper[31559]: I0216 02:32:00.517604 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="20be453f-795d-41dc-b549-38115aa87b08" containerName="extract"
Feb 16 02:32:00.518329 master-0 kubenswrapper[31559]: I0216 02:32:00.517643 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cab06de-417c-4114-aae4-2cf82637db54" containerName="extract"
Feb 16 02:32:00.518329 master-0 kubenswrapper[31559]: I0216 02:32:00.517919 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bbe13e2-1e42-45af-945d-a1137fff5444" containerName="extract"
Feb 16 02:32:00.518329 master-0 kubenswrapper[31559]: I0216 02:32:00.517946 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16b56c4-40cf-4fb2-ae54-c32f1df90335" containerName="extract"
Feb 16 02:32:00.520803 master-0 kubenswrapper[31559]: I0216 02:32:00.520718 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"
Feb 16 02:32:00.524593 master-0 kubenswrapper[31559]: I0216 02:32:00.524248 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Feb 16 02:32:00.524593 master-0 kubenswrapper[31559]: I0216 02:32:00.524330 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Feb 16 02:32:00.528492 master-0 kubenswrapper[31559]: I0216 02:32:00.527136 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"]
Feb 16 02:32:00.557798 master-0 kubenswrapper[31559]: I0216 02:32:00.557690 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-util" (OuterVolumeSpecName: "util") pod "c16b56c4-40cf-4fb2-ae54-c32f1df90335" (UID: "c16b56c4-40cf-4fb2-ae54-c32f1df90335"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:32:00.608465 master-0 kubenswrapper[31559]: I0216 02:32:00.604821 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fclp\" (UniqueName: \"kubernetes.io/projected/e9d91f78-2049-4ce3-a35d-5d67e56866f7-kube-api-access-5fclp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-68x4p\" (UID: \"e9d91f78-2049-4ce3-a35d-5d67e56866f7\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"
Feb 16 02:32:00.608465 master-0 kubenswrapper[31559]: I0216 02:32:00.604923 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e9d91f78-2049-4ce3-a35d-5d67e56866f7-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-68x4p\" (UID: \"e9d91f78-2049-4ce3-a35d-5d67e56866f7\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"
Feb 16 02:32:00.608465 master-0 kubenswrapper[31559]: I0216 02:32:00.605059 31559 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c16b56c4-40cf-4fb2-ae54-c32f1df90335-util\") on node \"master-0\" DevicePath \"\""
Feb 16 02:32:00.706229 master-0 kubenswrapper[31559]: I0216 02:32:00.706172 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fclp\" (UniqueName: \"kubernetes.io/projected/e9d91f78-2049-4ce3-a35d-5d67e56866f7-kube-api-access-5fclp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-68x4p\" (UID: \"e9d91f78-2049-4ce3-a35d-5d67e56866f7\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"
Feb 16 02:32:00.706460 master-0 kubenswrapper[31559]: I0216 02:32:00.706248 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e9d91f78-2049-4ce3-a35d-5d67e56866f7-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-68x4p\" (UID: \"e9d91f78-2049-4ce3-a35d-5d67e56866f7\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"
Feb 16 02:32:00.706773 master-0 kubenswrapper[31559]: I0216 02:32:00.706732 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e9d91f78-2049-4ce3-a35d-5d67e56866f7-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-68x4p\" (UID: \"e9d91f78-2049-4ce3-a35d-5d67e56866f7\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"
Feb 16 02:32:00.721757 master-0 kubenswrapper[31559]: I0216 02:32:00.721710 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fclp\" (UniqueName: \"kubernetes.io/projected/e9d91f78-2049-4ce3-a35d-5d67e56866f7-kube-api-access-5fclp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-68x4p\" (UID: \"e9d91f78-2049-4ce3-a35d-5d67e56866f7\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"
Feb 16 02:32:00.854757 master-0 kubenswrapper[31559]: I0216 02:32:00.854556 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"
Feb 16 02:32:00.988762 master-0 kubenswrapper[31559]: I0216 02:32:00.988709 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr" event={"ID":"c16b56c4-40cf-4fb2-ae54-c32f1df90335","Type":"ContainerDied","Data":"7b7d800bb96e7785fc9a9f57fba98575c909ffebd29b84a1b7cf30e0a8203b1c"}
Feb 16 02:32:00.989050 master-0 kubenswrapper[31559]: I0216 02:32:00.989022 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b7d800bb96e7785fc9a9f57fba98575c909ffebd29b84a1b7cf30e0a8203b1c"
Feb 16 02:32:00.989186 master-0 kubenswrapper[31559]: I0216 02:32:00.988827 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkwwr"
Feb 16 02:32:01.189617 master-0 kubenswrapper[31559]: I0216 02:32:01.187682 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p"]
Feb 16 02:32:01.193804 master-0 kubenswrapper[31559]: W0216 02:32:01.193737 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d91f78_2049_4ce3_a35d_5d67e56866f7.slice/crio-9e6e4c5823d68c1ed1feb1467f5f40b8e4617f15e76927e90153bf2028e78e7f WatchSource:0}: Error finding container 9e6e4c5823d68c1ed1feb1467f5f40b8e4617f15e76927e90153bf2028e78e7f: Status 404 returned error can't find the container with id 9e6e4c5823d68c1ed1feb1467f5f40b8e4617f15e76927e90153bf2028e78e7f
Feb 16 02:32:01.997461 master-0 kubenswrapper[31559]: I0216 02:32:01.997382 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p" event={"ID":"e9d91f78-2049-4ce3-a35d-5d67e56866f7","Type":"ContainerStarted","Data":"9e6e4c5823d68c1ed1feb1467f5f40b8e4617f15e76927e90153bf2028e78e7f"}
Feb 16 02:32:06.031333 master-0 kubenswrapper[31559]: I0216 02:32:06.031269 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p" event={"ID":"e9d91f78-2049-4ce3-a35d-5d67e56866f7","Type":"ContainerStarted","Data":"03a9fad28beb9e10a86b4f669f9bcc938a006bb2e96ca7ddc91ac6d21cd45ff9"}
Feb 16 02:32:06.068699 master-0 kubenswrapper[31559]: I0216 02:32:06.068555 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-68x4p" podStartSLOduration=2.119096776 podStartE2EDuration="6.068524003s" podCreationTimestamp="2026-02-16 02:32:00 +0000 UTC" firstStartedPulling="2026-02-16 02:32:01.196819122 +0000 UTC m=+573.541425157" lastFinishedPulling="2026-02-16 02:32:05.146246369 +0000 UTC m=+577.490852384" observedRunningTime="2026-02-16 02:32:06.057952631 +0000 UTC m=+578.402558676" watchObservedRunningTime="2026-02-16 02:32:06.068524003 +0000 UTC m=+578.413130048"
Feb 16 02:32:12.302000 master-0 kubenswrapper[31559]: I0216 02:32:12.301713 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-99jjg"]
Feb 16 02:32:12.305059 master-0 kubenswrapper[31559]: I0216 02:32:12.305026 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-99jjg"
Feb 16 02:32:12.308096 master-0 kubenswrapper[31559]: I0216 02:32:12.308028 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 16 02:32:12.308653 master-0 kubenswrapper[31559]: I0216 02:32:12.308628 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 16 02:32:12.340020 master-0 kubenswrapper[31559]: I0216 02:32:12.339949 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-99jjg"]
Feb 16 02:32:12.453875 master-0 kubenswrapper[31559]: I0216 02:32:12.451627 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xjsn\" (UniqueName: \"kubernetes.io/projected/5b480c53-efb7-4c60-b2d4-dd1a1b748e51-kube-api-access-8xjsn\") pod \"nmstate-operator-694c9596b7-99jjg\" (UID: \"5b480c53-efb7-4c60-b2d4-dd1a1b748e51\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-99jjg"
Feb 16 02:32:12.553788 master-0 kubenswrapper[31559]: I0216 02:32:12.553645 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xjsn\" (UniqueName: \"kubernetes.io/projected/5b480c53-efb7-4c60-b2d4-dd1a1b748e51-kube-api-access-8xjsn\") pod \"nmstate-operator-694c9596b7-99jjg\" (UID: \"5b480c53-efb7-4c60-b2d4-dd1a1b748e51\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-99jjg"
Feb 16 02:32:12.572041 master-0 kubenswrapper[31559]: I0216 02:32:12.571973 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xjsn\" (UniqueName: \"kubernetes.io/projected/5b480c53-efb7-4c60-b2d4-dd1a1b748e51-kube-api-access-8xjsn\") pod \"nmstate-operator-694c9596b7-99jjg\" (UID: \"5b480c53-efb7-4c60-b2d4-dd1a1b748e51\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-99jjg"
Feb 16 02:32:12.601429 master-0 kubenswrapper[31559]: I0216 02:32:12.601343 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n2lp4"]
Feb 16 02:32:12.603386 master-0 kubenswrapper[31559]: I0216 02:32:12.603340 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4"
Feb 16 02:32:12.605496 master-0 kubenswrapper[31559]: I0216 02:32:12.605378 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 16 02:32:12.605757 master-0 kubenswrapper[31559]: I0216 02:32:12.605721 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 16 02:32:12.608752 master-0 kubenswrapper[31559]: I0216 02:32:12.608673 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n2lp4"]
Feb 16 02:32:12.664284 master-0 kubenswrapper[31559]: I0216 02:32:12.664212 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-99jjg"
Feb 16 02:32:12.757649 master-0 kubenswrapper[31559]: I0216 02:32:12.757563 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aadb23c8-a326-49c0-aff8-69fedf4972f6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n2lp4\" (UID: \"aadb23c8-a326-49c0-aff8-69fedf4972f6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4"
Feb 16 02:32:12.757930 master-0 kubenswrapper[31559]: I0216 02:32:12.757693 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5sp4\" (UniqueName: \"kubernetes.io/projected/aadb23c8-a326-49c0-aff8-69fedf4972f6-kube-api-access-g5sp4\") pod \"cert-manager-cainjector-5545bd876-n2lp4\" (UID: \"aadb23c8-a326-49c0-aff8-69fedf4972f6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4"
Feb 16 02:32:12.860125 master-0 kubenswrapper[31559]: I0216 02:32:12.860067 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aadb23c8-a326-49c0-aff8-69fedf4972f6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n2lp4\" (UID: \"aadb23c8-a326-49c0-aff8-69fedf4972f6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4"
Feb 16 02:32:12.860361 master-0 kubenswrapper[31559]: I0216 02:32:12.860230 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5sp4\" (UniqueName: \"kubernetes.io/projected/aadb23c8-a326-49c0-aff8-69fedf4972f6-kube-api-access-g5sp4\") pod \"cert-manager-cainjector-5545bd876-n2lp4\" (UID: \"aadb23c8-a326-49c0-aff8-69fedf4972f6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4"
Feb 16 02:32:12.878113 master-0 kubenswrapper[31559]: I0216 02:32:12.878061 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aadb23c8-a326-49c0-aff8-69fedf4972f6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n2lp4\" (UID: \"aadb23c8-a326-49c0-aff8-69fedf4972f6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4"
Feb 16 02:32:12.880237 master-0 kubenswrapper[31559]: I0216 02:32:12.880186 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5sp4\" (UniqueName: \"kubernetes.io/projected/aadb23c8-a326-49c0-aff8-69fedf4972f6-kube-api-access-g5sp4\") pod \"cert-manager-cainjector-5545bd876-n2lp4\" (UID: \"aadb23c8-a326-49c0-aff8-69fedf4972f6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4"
Feb 16 02:32:12.929698 master-0 kubenswrapper[31559]: I0216 02:32:12.929628 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4"
Feb 16 02:32:13.189419 master-0 kubenswrapper[31559]: W0216 02:32:13.189369 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b480c53_efb7_4c60_b2d4_dd1a1b748e51.slice/crio-644733ef0f72e73b70742db7f468ad6d3994d659c53af0340f66b4c5a8bb598b WatchSource:0}: Error finding container 644733ef0f72e73b70742db7f468ad6d3994d659c53af0340f66b4c5a8bb598b: Status 404 returned error can't find the container with id 644733ef0f72e73b70742db7f468ad6d3994d659c53af0340f66b4c5a8bb598b
Feb 16 02:32:13.190585 master-0 kubenswrapper[31559]: I0216 02:32:13.190470 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-99jjg"]
Feb 16 02:32:13.383462 master-0 kubenswrapper[31559]: W0216 02:32:13.383375 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaadb23c8_a326_49c0_aff8_69fedf4972f6.slice/crio-82fa383a8096cc366629503b7dee3cc63917a9d5f8fc70299754e667afd35fc5 WatchSource:0}: Error finding container 82fa383a8096cc366629503b7dee3cc63917a9d5f8fc70299754e667afd35fc5: Status 404 returned error can't find the container with id 82fa383a8096cc366629503b7dee3cc63917a9d5f8fc70299754e667afd35fc5
Feb 16 02:32:13.389909 master-0 kubenswrapper[31559]: I0216 02:32:13.389844 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n2lp4"]
Feb 16 02:32:14.102877 master-0 kubenswrapper[31559]: I0216 02:32:14.102804 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-99jjg" event={"ID":"5b480c53-efb7-4c60-b2d4-dd1a1b748e51","Type":"ContainerStarted","Data":"644733ef0f72e73b70742db7f468ad6d3994d659c53af0340f66b4c5a8bb598b"}
Feb 16 02:32:14.105543 master-0 kubenswrapper[31559]: I0216 02:32:14.105368 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4" event={"ID":"aadb23c8-a326-49c0-aff8-69fedf4972f6","Type":"ContainerStarted","Data":"82fa383a8096cc366629503b7dee3cc63917a9d5f8fc70299754e667afd35fc5"}
Feb 16 02:32:14.795337 master-0 kubenswrapper[31559]: I0216 02:32:14.795273 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8d6hp"]
Feb 16 02:32:14.797004 master-0 kubenswrapper[31559]: I0216 02:32:14.796286 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp"
Feb 16 02:32:14.816430 master-0 kubenswrapper[31559]: I0216 02:32:14.816386 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8d6hp"]
Feb 16 02:32:14.896364 master-0 kubenswrapper[31559]: I0216 02:32:14.896295 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lcj\" (UniqueName: \"kubernetes.io/projected/81ae9393-d088-4112-aada-785f7454ee53-kube-api-access-v5lcj\") pod \"cert-manager-webhook-6888856db4-8d6hp\" (UID: \"81ae9393-d088-4112-aada-785f7454ee53\") " pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp"
Feb 16 02:32:14.896637 master-0 kubenswrapper[31559]: I0216 02:32:14.896424 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81ae9393-d088-4112-aada-785f7454ee53-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8d6hp\" (UID: \"81ae9393-d088-4112-aada-785f7454ee53\") " pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp"
Feb 16 02:32:14.997975 master-0 kubenswrapper[31559]: I0216 02:32:14.997900 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5lcj\" (UniqueName: \"kubernetes.io/projected/81ae9393-d088-4112-aada-785f7454ee53-kube-api-access-v5lcj\") pod \"cert-manager-webhook-6888856db4-8d6hp\" (UID: \"81ae9393-d088-4112-aada-785f7454ee53\") " pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp"
Feb 16 02:32:14.998248 master-0 kubenswrapper[31559]: I0216 02:32:14.998030 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81ae9393-d088-4112-aada-785f7454ee53-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8d6hp\" (UID: \"81ae9393-d088-4112-aada-785f7454ee53\") " pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp"
Feb 16 02:32:15.018632 master-0 kubenswrapper[31559]: I0216 02:32:15.016209 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5lcj\" (UniqueName: \"kubernetes.io/projected/81ae9393-d088-4112-aada-785f7454ee53-kube-api-access-v5lcj\") pod \"cert-manager-webhook-6888856db4-8d6hp\" (UID: \"81ae9393-d088-4112-aada-785f7454ee53\") " pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp"
Feb 16 02:32:15.021864 master-0 kubenswrapper[31559]: I0216 02:32:15.021773 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81ae9393-d088-4112-aada-785f7454ee53-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8d6hp\" (UID: \"81ae9393-d088-4112-aada-785f7454ee53\") " pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp"
Feb 16 02:32:15.178820 master-0 kubenswrapper[31559]: I0216 02:32:15.178141 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp"
Feb 16 02:32:15.613937 master-0 kubenswrapper[31559]: I0216 02:32:15.613861 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8d6hp"]
Feb 16 02:32:15.769386 master-0 kubenswrapper[31559]: W0216 02:32:15.768901 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ae9393_d088_4112_aada_785f7454ee53.slice/crio-b32e74a8bb9e5226125e05b56810ca8b6fd4db7723bc31dbc2b2f73b54db7147 WatchSource:0}: Error finding container b32e74a8bb9e5226125e05b56810ca8b6fd4db7723bc31dbc2b2f73b54db7147: Status 404 returned error can't find the container with id b32e74a8bb9e5226125e05b56810ca8b6fd4db7723bc31dbc2b2f73b54db7147
Feb 16 02:32:16.141470 master-0 kubenswrapper[31559]: I0216 02:32:16.141132 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp" event={"ID":"81ae9393-d088-4112-aada-785f7454ee53","Type":"ContainerStarted","Data":"b32e74a8bb9e5226125e05b56810ca8b6fd4db7723bc31dbc2b2f73b54db7147"}
Feb 16 02:32:16.165476 master-0 kubenswrapper[31559]: I0216 02:32:16.161710 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-99jjg" event={"ID":"5b480c53-efb7-4c60-b2d4-dd1a1b748e51","Type":"ContainerStarted","Data":"72ae2904e642aa9ad1eff28c43e04c2a01990c9a162c8b0fe2959a7330497d50"}
Feb 16 02:32:16.217609 master-0 kubenswrapper[31559]: I0216 02:32:16.213414 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-99jjg" podStartSLOduration=1.549660585 podStartE2EDuration="4.213398039s" podCreationTimestamp="2026-02-16 02:32:12 +0000 UTC" firstStartedPulling="2026-02-16 02:32:13.193494686 +0000 UTC m=+585.538100711" lastFinishedPulling="2026-02-16 02:32:15.85723215 +0000 UTC m=+588.201838165" observedRunningTime="2026-02-16 02:32:16.204882299 +0000 UTC m=+588.549488314" watchObservedRunningTime="2026-02-16 02:32:16.213398039 +0000 UTC m=+588.558004054"
Feb 16 02:32:18.994459 master-0 kubenswrapper[31559]: I0216 02:32:18.992922 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"]
Feb 16 02:32:18.994459 master-0 kubenswrapper[31559]: I0216 02:32:18.993871 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:19.000187 master-0 kubenswrapper[31559]: I0216 02:32:19.000137 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 16 02:32:19.004642 master-0 kubenswrapper[31559]: I0216 02:32:19.003554 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 16 02:32:19.006751 master-0 kubenswrapper[31559]: I0216 02:32:19.005984 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 16 02:32:19.006751 master-0 kubenswrapper[31559]: I0216 02:32:19.006013 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 16 02:32:19.018996 master-0 kubenswrapper[31559]: I0216 02:32:19.018929 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"]
Feb 16 02:32:19.128455 master-0 kubenswrapper[31559]: I0216 02:32:19.127857 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-apiservice-cert\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:19.128455 master-0 kubenswrapper[31559]: I0216 02:32:19.127900 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pb8z\" (UniqueName: \"kubernetes.io/projected/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-kube-api-access-9pb8z\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:19.128455 master-0 kubenswrapper[31559]: I0216 02:32:19.127967 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-webhook-cert\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:19.251598 master-0 kubenswrapper[31559]: I0216 02:32:19.251457 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-webhook-cert\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:19.251809 master-0 kubenswrapper[31559]: I0216 02:32:19.251690 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-apiservice-cert\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:19.251809 master-0 kubenswrapper[31559]: I0216 02:32:19.251732 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pb8z\" (UniqueName: \"kubernetes.io/projected/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-kube-api-access-9pb8z\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:19.254871 master-0 kubenswrapper[31559]: I0216 02:32:19.254824 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-webhook-cert\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:19.272181 master-0 kubenswrapper[31559]: I0216 02:32:19.272132 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-apiservice-cert\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:20.362452 master-0 kubenswrapper[31559]: I0216 02:32:20.359421 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pb8z\" (UniqueName: \"kubernetes.io/projected/35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930-kube-api-access-9pb8z\") pod \"metallb-operator-controller-manager-77bb9bb66d-68x8r\" (UID: \"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930\") " pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:20.573967 master-0 kubenswrapper[31559]: I0216 02:32:20.573234 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:32:20.736000 master-0 kubenswrapper[31559]: I0216 02:32:20.735629 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2"]
Feb 16 02:32:20.745490 master-0 kubenswrapper[31559]: I0216 02:32:20.736812 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2"
Feb 16 02:32:20.745490 master-0 kubenswrapper[31559]: I0216 02:32:20.738893 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 16 02:32:20.745490 master-0 kubenswrapper[31559]: I0216 02:32:20.739131 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 02:32:20.749475 master-0 kubenswrapper[31559]: I0216 02:32:20.748780 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2"]
Feb 16 02:32:20.798589 master-0 kubenswrapper[31559]: I0216 02:32:20.794193 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8q4\" (UniqueName: \"kubernetes.io/projected/a7b52d2c-3950-409f-b570-6eac039d4603-kube-api-access-fl8q4\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") " pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2"
Feb 16 02:32:20.798589 master-0 kubenswrapper[31559]: I0216 02:32:20.794275 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7b52d2c-3950-409f-b570-6eac039d4603-apiservice-cert\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") "
pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:20.798589 master-0 kubenswrapper[31559]: I0216 02:32:20.794341 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7b52d2c-3950-409f-b570-6eac039d4603-webhook-cert\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") " pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:20.896447 master-0 kubenswrapper[31559]: I0216 02:32:20.895489 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7b52d2c-3950-409f-b570-6eac039d4603-apiservice-cert\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") " pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:20.896643 master-0 kubenswrapper[31559]: I0216 02:32:20.896510 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7b52d2c-3950-409f-b570-6eac039d4603-webhook-cert\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") " pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:20.896643 master-0 kubenswrapper[31559]: I0216 02:32:20.896606 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8q4\" (UniqueName: \"kubernetes.io/projected/a7b52d2c-3950-409f-b570-6eac039d4603-kube-api-access-fl8q4\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") " pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:20.900896 master-0 kubenswrapper[31559]: I0216 02:32:20.900860 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7b52d2c-3950-409f-b570-6eac039d4603-webhook-cert\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") " pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:20.918487 master-0 kubenswrapper[31559]: I0216 02:32:20.905989 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7b52d2c-3950-409f-b570-6eac039d4603-apiservice-cert\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") " pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:20.930375 master-0 kubenswrapper[31559]: I0216 02:32:20.929526 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8q4\" (UniqueName: \"kubernetes.io/projected/a7b52d2c-3950-409f-b570-6eac039d4603-kube-api-access-fl8q4\") pod \"metallb-operator-webhook-server-599c6f98c8-4w4b2\" (UID: \"a7b52d2c-3950-409f-b570-6eac039d4603\") " pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:21.122460 master-0 kubenswrapper[31559]: I0216 02:32:21.121787 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" Feb 16 02:32:21.182700 master-0 kubenswrapper[31559]: I0216 02:32:21.171477 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"] Feb 16 02:32:21.229093 master-0 kubenswrapper[31559]: I0216 02:32:21.223065 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4" event={"ID":"aadb23c8-a326-49c0-aff8-69fedf4972f6","Type":"ContainerStarted","Data":"c6c50d05a0cabbefcb1cf2ea47463157209dee05565171897a1c784100d0592f"} Feb 16 02:32:21.229093 master-0 kubenswrapper[31559]: I0216 02:32:21.225973 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp" event={"ID":"81ae9393-d088-4112-aada-785f7454ee53","Type":"ContainerStarted","Data":"9cf99b2ca2556f2bca85a579ccd3cccaea9fefe3c8318b0b23fbdd224770af8f"} Feb 16 02:32:21.229093 master-0 kubenswrapper[31559]: I0216 02:32:21.226359 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp" Feb 16 02:32:21.229093 master-0 kubenswrapper[31559]: I0216 02:32:21.228399 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r" event={"ID":"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930","Type":"ContainerStarted","Data":"37752cbd3d24e42b1ddb7f82812d46f5bdcd4abd7c205d70e703e92122bd61f1"} Feb 16 02:32:21.337454 master-0 kubenswrapper[31559]: I0216 02:32:21.336556 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-n2lp4" podStartSLOduration=2.273919507 podStartE2EDuration="9.32642238s" podCreationTimestamp="2026-02-16 02:32:12 +0000 UTC" firstStartedPulling="2026-02-16 02:32:13.385060359 +0000 UTC m=+585.729666384" lastFinishedPulling="2026-02-16 
02:32:20.437563242 +0000 UTC m=+592.782169257" observedRunningTime="2026-02-16 02:32:21.291699312 +0000 UTC m=+593.636305327" watchObservedRunningTime="2026-02-16 02:32:21.32642238 +0000 UTC m=+593.671028395" Feb 16 02:32:21.569150 master-0 kubenswrapper[31559]: I0216 02:32:21.568927 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp" podStartSLOduration=2.861380267 podStartE2EDuration="7.56890134s" podCreationTimestamp="2026-02-16 02:32:14 +0000 UTC" firstStartedPulling="2026-02-16 02:32:15.773048471 +0000 UTC m=+588.117654486" lastFinishedPulling="2026-02-16 02:32:20.480569544 +0000 UTC m=+592.825175559" observedRunningTime="2026-02-16 02:32:21.338580041 +0000 UTC m=+593.683186056" watchObservedRunningTime="2026-02-16 02:32:21.56890134 +0000 UTC m=+593.913507365" Feb 16 02:32:21.575204 master-0 kubenswrapper[31559]: I0216 02:32:21.573274 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2"] Feb 16 02:32:21.579990 master-0 kubenswrapper[31559]: W0216 02:32:21.579915 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7b52d2c_3950_409f_b570_6eac039d4603.slice/crio-da0449eebfff409754763a31b835e40ff768e4310f23c59482f07826bc9fdef8 WatchSource:0}: Error finding container da0449eebfff409754763a31b835e40ff768e4310f23c59482f07826bc9fdef8: Status 404 returned error can't find the container with id da0449eebfff409754763a31b835e40ff768e4310f23c59482f07826bc9fdef8 Feb 16 02:32:22.237424 master-0 kubenswrapper[31559]: I0216 02:32:22.237347 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" event={"ID":"a7b52d2c-3950-409f-b570-6eac039d4603","Type":"ContainerStarted","Data":"da0449eebfff409754763a31b835e40ff768e4310f23c59482f07826bc9fdef8"} Feb 16 02:32:25.188454 master-0 
kubenswrapper[31559]: I0216 02:32:25.184667 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-8d6hp" Feb 16 02:32:25.315454 master-0 kubenswrapper[31559]: I0216 02:32:25.315136 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r" event={"ID":"35fa7fc2-7ff3-4c54-9b0c-0cd8c0983930","Type":"ContainerStarted","Data":"3ce7e8ab149373fcd83eb1b005e68a04730254ac4a55c63d83d196311d127b71"} Feb 16 02:32:25.320459 master-0 kubenswrapper[31559]: I0216 02:32:25.315986 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r" Feb 16 02:32:25.359453 master-0 kubenswrapper[31559]: I0216 02:32:25.355306 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r" podStartSLOduration=4.123894549 podStartE2EDuration="7.355283147s" podCreationTimestamp="2026-02-16 02:32:18 +0000 UTC" firstStartedPulling="2026-02-16 02:32:21.173501362 +0000 UTC m=+593.518107377" lastFinishedPulling="2026-02-16 02:32:24.40488996 +0000 UTC m=+596.749495975" observedRunningTime="2026-02-16 02:32:25.351617317 +0000 UTC m=+597.696223332" watchObservedRunningTime="2026-02-16 02:32:25.355283147 +0000 UTC m=+597.699889172" Feb 16 02:32:27.704782 master-0 kubenswrapper[31559]: I0216 02:32:27.702263 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl"] Feb 16 02:32:27.704782 master-0 kubenswrapper[31559]: I0216 02:32:27.703823 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl" Feb 16 02:32:27.707172 master-0 kubenswrapper[31559]: I0216 02:32:27.706072 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 02:32:27.718959 master-0 kubenswrapper[31559]: I0216 02:32:27.718906 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl"] Feb 16 02:32:27.760244 master-0 kubenswrapper[31559]: I0216 02:32:27.759922 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vndgc\" (UniqueName: \"kubernetes.io/projected/3992bcd2-e420-4dcb-8cdd-6364fced2ea3-kube-api-access-vndgc\") pod \"obo-prometheus-operator-68bc856cb9-gcxpl\" (UID: \"3992bcd2-e420-4dcb-8cdd-6364fced2ea3\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl" Feb 16 02:32:27.761858 master-0 kubenswrapper[31559]: I0216 02:32:27.761462 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 02:32:27.863462 master-0 kubenswrapper[31559]: I0216 02:32:27.861317 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vndgc\" (UniqueName: \"kubernetes.io/projected/3992bcd2-e420-4dcb-8cdd-6364fced2ea3-kube-api-access-vndgc\") pod \"obo-prometheus-operator-68bc856cb9-gcxpl\" (UID: \"3992bcd2-e420-4dcb-8cdd-6364fced2ea3\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl" Feb 16 02:32:27.867817 master-0 kubenswrapper[31559]: I0216 02:32:27.866476 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87"] Feb 16 02:32:27.867817 master-0 kubenswrapper[31559]: I0216 02:32:27.867803 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" Feb 16 02:32:27.871176 master-0 kubenswrapper[31559]: I0216 02:32:27.871133 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 02:32:27.894729 master-0 kubenswrapper[31559]: I0216 02:32:27.889219 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7"] Feb 16 02:32:27.894729 master-0 kubenswrapper[31559]: I0216 02:32:27.890220 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" Feb 16 02:32:27.905046 master-0 kubenswrapper[31559]: I0216 02:32:27.904987 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 02:32:27.908973 master-0 kubenswrapper[31559]: I0216 02:32:27.908908 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87"] Feb 16 02:32:27.922801 master-0 kubenswrapper[31559]: I0216 02:32:27.921169 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 02:32:27.924499 master-0 kubenswrapper[31559]: I0216 02:32:27.924093 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7"] Feb 16 02:32:27.938496 master-0 kubenswrapper[31559]: I0216 02:32:27.937994 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vndgc\" (UniqueName: \"kubernetes.io/projected/3992bcd2-e420-4dcb-8cdd-6364fced2ea3-kube-api-access-vndgc\") pod \"obo-prometheus-operator-68bc856cb9-gcxpl\" (UID: \"3992bcd2-e420-4dcb-8cdd-6364fced2ea3\") " 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl" Feb 16 02:32:27.966073 master-0 kubenswrapper[31559]: I0216 02:32:27.964637 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb84124a-4308-44e5-9eda-c3f70e494577-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7\" (UID: \"eb84124a-4308-44e5-9eda-c3f70e494577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" Feb 16 02:32:27.966073 master-0 kubenswrapper[31559]: I0216 02:32:27.964693 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d289f4fe-8bec-45e3-be40-6b25c4f5f700-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87\" (UID: \"d289f4fe-8bec-45e3-be40-6b25c4f5f700\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" Feb 16 02:32:27.966073 master-0 kubenswrapper[31559]: I0216 02:32:27.964745 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb84124a-4308-44e5-9eda-c3f70e494577-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7\" (UID: \"eb84124a-4308-44e5-9eda-c3f70e494577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" Feb 16 02:32:27.966073 master-0 kubenswrapper[31559]: I0216 02:32:27.964765 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d289f4fe-8bec-45e3-be40-6b25c4f5f700-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87\" (UID: \"d289f4fe-8bec-45e3-be40-6b25c4f5f700\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" Feb 16 02:32:28.024879 master-0 kubenswrapper[31559]: I0216 02:32:28.024746 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pv82g"] Feb 16 02:32:28.030843 master-0 kubenswrapper[31559]: I0216 02:32:28.029411 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pv82g" Feb 16 02:32:28.032797 master-0 kubenswrapper[31559]: I0216 02:32:28.031857 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 02:32:28.040242 master-0 kubenswrapper[31559]: I0216 02:32:28.038710 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pv82g"] Feb 16 02:32:28.065128 master-0 kubenswrapper[31559]: I0216 02:32:28.065068 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl" Feb 16 02:32:28.066073 master-0 kubenswrapper[31559]: I0216 02:32:28.066012 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kz5\" (UniqueName: \"kubernetes.io/projected/d6ea2b93-5da3-4daa-90b2-d120b74eb85c-kube-api-access-d8kz5\") pod \"observability-operator-59bdc8b94-pv82g\" (UID: \"d6ea2b93-5da3-4daa-90b2-d120b74eb85c\") " pod="openshift-operators/observability-operator-59bdc8b94-pv82g" Feb 16 02:32:28.066133 master-0 kubenswrapper[31559]: I0216 02:32:28.066090 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6ea2b93-5da3-4daa-90b2-d120b74eb85c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pv82g\" (UID: \"d6ea2b93-5da3-4daa-90b2-d120b74eb85c\") " pod="openshift-operators/observability-operator-59bdc8b94-pv82g" Feb 16 02:32:28.066169 master-0 kubenswrapper[31559]: I0216 02:32:28.066140 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb84124a-4308-44e5-9eda-c3f70e494577-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7\" (UID: \"eb84124a-4308-44e5-9eda-c3f70e494577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" Feb 16 02:32:28.066208 master-0 kubenswrapper[31559]: I0216 02:32:28.066186 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d289f4fe-8bec-45e3-be40-6b25c4f5f700-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87\" (UID: \"d289f4fe-8bec-45e3-be40-6b25c4f5f700\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" Feb 16 02:32:28.068923 
master-0 kubenswrapper[31559]: I0216 02:32:28.066335 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb84124a-4308-44e5-9eda-c3f70e494577-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7\" (UID: \"eb84124a-4308-44e5-9eda-c3f70e494577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" Feb 16 02:32:28.068923 master-0 kubenswrapper[31559]: I0216 02:32:28.066401 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d289f4fe-8bec-45e3-be40-6b25c4f5f700-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87\" (UID: \"d289f4fe-8bec-45e3-be40-6b25c4f5f700\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" Feb 16 02:32:28.071978 master-0 kubenswrapper[31559]: I0216 02:32:28.069698 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d289f4fe-8bec-45e3-be40-6b25c4f5f700-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87\" (UID: \"d289f4fe-8bec-45e3-be40-6b25c4f5f700\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" Feb 16 02:32:28.071978 master-0 kubenswrapper[31559]: I0216 02:32:28.070310 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d289f4fe-8bec-45e3-be40-6b25c4f5f700-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87\" (UID: \"d289f4fe-8bec-45e3-be40-6b25c4f5f700\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" Feb 16 02:32:28.078600 master-0 kubenswrapper[31559]: I0216 02:32:28.072655 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb84124a-4308-44e5-9eda-c3f70e494577-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7\" (UID: \"eb84124a-4308-44e5-9eda-c3f70e494577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" Feb 16 02:32:28.078600 master-0 kubenswrapper[31559]: I0216 02:32:28.076991 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb84124a-4308-44e5-9eda-c3f70e494577-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7\" (UID: \"eb84124a-4308-44e5-9eda-c3f70e494577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" Feb 16 02:32:28.178220 master-0 kubenswrapper[31559]: I0216 02:32:28.178156 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kz5\" (UniqueName: \"kubernetes.io/projected/d6ea2b93-5da3-4daa-90b2-d120b74eb85c-kube-api-access-d8kz5\") pod \"observability-operator-59bdc8b94-pv82g\" (UID: \"d6ea2b93-5da3-4daa-90b2-d120b74eb85c\") " pod="openshift-operators/observability-operator-59bdc8b94-pv82g" Feb 16 02:32:28.178220 master-0 kubenswrapper[31559]: I0216 02:32:28.178219 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6ea2b93-5da3-4daa-90b2-d120b74eb85c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pv82g\" (UID: \"d6ea2b93-5da3-4daa-90b2-d120b74eb85c\") " pod="openshift-operators/observability-operator-59bdc8b94-pv82g" Feb 16 02:32:28.185704 master-0 kubenswrapper[31559]: I0216 02:32:28.185648 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6ea2b93-5da3-4daa-90b2-d120b74eb85c-observability-operator-tls\") pod 
\"observability-operator-59bdc8b94-pv82g\" (UID: \"d6ea2b93-5da3-4daa-90b2-d120b74eb85c\") " pod="openshift-operators/observability-operator-59bdc8b94-pv82g" Feb 16 02:32:28.196561 master-0 kubenswrapper[31559]: I0216 02:32:28.196389 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5ts7h"] Feb 16 02:32:28.197637 master-0 kubenswrapper[31559]: I0216 02:32:28.197609 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5ts7h" Feb 16 02:32:28.201455 master-0 kubenswrapper[31559]: I0216 02:32:28.201183 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kz5\" (UniqueName: \"kubernetes.io/projected/d6ea2b93-5da3-4daa-90b2-d120b74eb85c-kube-api-access-d8kz5\") pod \"observability-operator-59bdc8b94-pv82g\" (UID: \"d6ea2b93-5da3-4daa-90b2-d120b74eb85c\") " pod="openshift-operators/observability-operator-59bdc8b94-pv82g" Feb 16 02:32:28.236854 master-0 kubenswrapper[31559]: I0216 02:32:28.236819 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" Feb 16 02:32:28.255364 master-0 kubenswrapper[31559]: I0216 02:32:28.255013 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5ts7h"] Feb 16 02:32:28.302584 master-0 kubenswrapper[31559]: I0216 02:32:28.300131 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7eaddb9-2146-4ec1-91fd-38c1ebab0da3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5ts7h\" (UID: \"f7eaddb9-2146-4ec1-91fd-38c1ebab0da3\") " pod="openshift-operators/perses-operator-5bf474d74f-5ts7h" Feb 16 02:32:28.302584 master-0 kubenswrapper[31559]: I0216 02:32:28.300205 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxjst\" (UniqueName: \"kubernetes.io/projected/f7eaddb9-2146-4ec1-91fd-38c1ebab0da3-kube-api-access-nxjst\") pod \"perses-operator-5bf474d74f-5ts7h\" (UID: \"f7eaddb9-2146-4ec1-91fd-38c1ebab0da3\") " pod="openshift-operators/perses-operator-5bf474d74f-5ts7h" Feb 16 02:32:28.302797 master-0 kubenswrapper[31559]: I0216 02:32:28.302730 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" Feb 16 02:32:28.351717 master-0 kubenswrapper[31559]: I0216 02:32:28.351672 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pv82g"
Feb 16 02:32:28.369031 master-0 kubenswrapper[31559]: I0216 02:32:28.366778 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" event={"ID":"a7b52d2c-3950-409f-b570-6eac039d4603","Type":"ContainerStarted","Data":"86d7c9e28e6fc7a451d26925aa6f72c6bf3675706747cd430bc604973d0fe50f"}
Feb 16 02:32:28.369031 master-0 kubenswrapper[31559]: I0216 02:32:28.367721 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2"
Feb 16 02:32:28.414847 master-0 kubenswrapper[31559]: I0216 02:32:28.401656 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7eaddb9-2146-4ec1-91fd-38c1ebab0da3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5ts7h\" (UID: \"f7eaddb9-2146-4ec1-91fd-38c1ebab0da3\") " pod="openshift-operators/perses-operator-5bf474d74f-5ts7h"
Feb 16 02:32:28.414847 master-0 kubenswrapper[31559]: I0216 02:32:28.401718 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxjst\" (UniqueName: \"kubernetes.io/projected/f7eaddb9-2146-4ec1-91fd-38c1ebab0da3-kube-api-access-nxjst\") pod \"perses-operator-5bf474d74f-5ts7h\" (UID: \"f7eaddb9-2146-4ec1-91fd-38c1ebab0da3\") " pod="openshift-operators/perses-operator-5bf474d74f-5ts7h"
Feb 16 02:32:28.414847 master-0 kubenswrapper[31559]: I0216 02:32:28.402830 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7eaddb9-2146-4ec1-91fd-38c1ebab0da3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5ts7h\" (UID: \"f7eaddb9-2146-4ec1-91fd-38c1ebab0da3\") " pod="openshift-operators/perses-operator-5bf474d74f-5ts7h"
Feb 16 02:32:28.427471 master-0 kubenswrapper[31559]: I0216 02:32:28.426659 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxjst\" (UniqueName: \"kubernetes.io/projected/f7eaddb9-2146-4ec1-91fd-38c1ebab0da3-kube-api-access-nxjst\") pod \"perses-operator-5bf474d74f-5ts7h\" (UID: \"f7eaddb9-2146-4ec1-91fd-38c1ebab0da3\") " pod="openshift-operators/perses-operator-5bf474d74f-5ts7h"
Feb 16 02:32:28.553532 master-0 kubenswrapper[31559]: I0216 02:32:28.551869 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5ts7h"
Feb 16 02:32:28.568904 master-0 kubenswrapper[31559]: I0216 02:32:28.568822 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2" podStartSLOduration=2.679158278 podStartE2EDuration="8.568791284s" podCreationTimestamp="2026-02-16 02:32:20 +0000 UTC" firstStartedPulling="2026-02-16 02:32:21.582938347 +0000 UTC m=+593.927544362" lastFinishedPulling="2026-02-16 02:32:27.472571343 +0000 UTC m=+599.817177368" observedRunningTime="2026-02-16 02:32:28.405060019 +0000 UTC m=+600.749666034" watchObservedRunningTime="2026-02-16 02:32:28.568791284 +0000 UTC m=+600.913397299"
Feb 16 02:32:28.578770 master-0 kubenswrapper[31559]: I0216 02:32:28.578718 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl"]
Feb 16 02:32:28.768962 master-0 kubenswrapper[31559]: I0216 02:32:28.768038 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-8z74l"]
Feb 16 02:32:28.769776 master-0 kubenswrapper[31559]: I0216 02:32:28.768991 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-8z74l"
Feb 16 02:32:28.785699 master-0 kubenswrapper[31559]: I0216 02:32:28.785199 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-8z74l"]
Feb 16 02:32:28.811705 master-0 kubenswrapper[31559]: I0216 02:32:28.808354 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a06fbaeb-92be-4317-b5cf-23cb4b031376-bound-sa-token\") pod \"cert-manager-545d4d4674-8z74l\" (UID: \"a06fbaeb-92be-4317-b5cf-23cb4b031376\") " pod="cert-manager/cert-manager-545d4d4674-8z74l"
Feb 16 02:32:28.811705 master-0 kubenswrapper[31559]: I0216 02:32:28.808448 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxdc\" (UniqueName: \"kubernetes.io/projected/a06fbaeb-92be-4317-b5cf-23cb4b031376-kube-api-access-cvxdc\") pod \"cert-manager-545d4d4674-8z74l\" (UID: \"a06fbaeb-92be-4317-b5cf-23cb4b031376\") " pod="cert-manager/cert-manager-545d4d4674-8z74l"
Feb 16 02:32:28.857425 master-0 kubenswrapper[31559]: I0216 02:32:28.857372 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87"]
Feb 16 02:32:28.907914 master-0 kubenswrapper[31559]: I0216 02:32:28.905670 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7"]
Feb 16 02:32:28.910025 master-0 kubenswrapper[31559]: I0216 02:32:28.909981 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a06fbaeb-92be-4317-b5cf-23cb4b031376-bound-sa-token\") pod \"cert-manager-545d4d4674-8z74l\" (UID: \"a06fbaeb-92be-4317-b5cf-23cb4b031376\") " pod="cert-manager/cert-manager-545d4d4674-8z74l"
Feb 16 02:32:28.910103 master-0 kubenswrapper[31559]: I0216 02:32:28.910083 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxdc\" (UniqueName: \"kubernetes.io/projected/a06fbaeb-92be-4317-b5cf-23cb4b031376-kube-api-access-cvxdc\") pod \"cert-manager-545d4d4674-8z74l\" (UID: \"a06fbaeb-92be-4317-b5cf-23cb4b031376\") " pod="cert-manager/cert-manager-545d4d4674-8z74l"
Feb 16 02:32:28.927707 master-0 kubenswrapper[31559]: I0216 02:32:28.927656 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxdc\" (UniqueName: \"kubernetes.io/projected/a06fbaeb-92be-4317-b5cf-23cb4b031376-kube-api-access-cvxdc\") pod \"cert-manager-545d4d4674-8z74l\" (UID: \"a06fbaeb-92be-4317-b5cf-23cb4b031376\") " pod="cert-manager/cert-manager-545d4d4674-8z74l"
Feb 16 02:32:28.933597 master-0 kubenswrapper[31559]: I0216 02:32:28.933223 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a06fbaeb-92be-4317-b5cf-23cb4b031376-bound-sa-token\") pod \"cert-manager-545d4d4674-8z74l\" (UID: \"a06fbaeb-92be-4317-b5cf-23cb4b031376\") " pod="cert-manager/cert-manager-545d4d4674-8z74l"
Feb 16 02:32:29.004613 master-0 kubenswrapper[31559]: I0216 02:32:29.004549 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pv82g"]
Feb 16 02:32:29.005061 master-0 kubenswrapper[31559]: W0216 02:32:29.005028 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ea2b93_5da3_4daa_90b2_d120b74eb85c.slice/crio-a8f06bbb9f7cf4f3ca439256c4954c4c9d8dc29aecadacb17f69885630d2324c WatchSource:0}: Error finding container a8f06bbb9f7cf4f3ca439256c4954c4c9d8dc29aecadacb17f69885630d2324c: Status 404 returned error can't find the container with id a8f06bbb9f7cf4f3ca439256c4954c4c9d8dc29aecadacb17f69885630d2324c
Feb 16 02:32:29.086914 master-0 kubenswrapper[31559]: I0216 02:32:29.086817 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-8z74l"
Feb 16 02:32:29.138581 master-0 kubenswrapper[31559]: I0216 02:32:29.138516 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5ts7h"]
Feb 16 02:32:29.155132 master-0 kubenswrapper[31559]: W0216 02:32:29.155057 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7eaddb9_2146_4ec1_91fd_38c1ebab0da3.slice/crio-c3fba4724f8b9d32510d2c251d49a909bcf560ff78f0b6e193b6c0a9bd4b34cd WatchSource:0}: Error finding container c3fba4724f8b9d32510d2c251d49a909bcf560ff78f0b6e193b6c0a9bd4b34cd: Status 404 returned error can't find the container with id c3fba4724f8b9d32510d2c251d49a909bcf560ff78f0b6e193b6c0a9bd4b34cd
Feb 16 02:32:29.379825 master-0 kubenswrapper[31559]: I0216 02:32:29.379342 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5ts7h" event={"ID":"f7eaddb9-2146-4ec1-91fd-38c1ebab0da3","Type":"ContainerStarted","Data":"c3fba4724f8b9d32510d2c251d49a909bcf560ff78f0b6e193b6c0a9bd4b34cd"}
Feb 16 02:32:29.380994 master-0 kubenswrapper[31559]: I0216 02:32:29.380772 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" event={"ID":"d289f4fe-8bec-45e3-be40-6b25c4f5f700","Type":"ContainerStarted","Data":"ddc74dbe9d61ea689f1b9d6b74cd9f73bf434061342025eb084ab7ed87d06678"}
Feb 16 02:32:29.381869 master-0 kubenswrapper[31559]: I0216 02:32:29.381823 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl" event={"ID":"3992bcd2-e420-4dcb-8cdd-6364fced2ea3","Type":"ContainerStarted","Data":"7708ce8a20c32f31386cb35bc2e2ef7df430ad264f14f8cc64429ddf528969fc"}
Feb 16 02:32:29.383313 master-0 kubenswrapper[31559]: I0216 02:32:29.383051 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pv82g" event={"ID":"d6ea2b93-5da3-4daa-90b2-d120b74eb85c","Type":"ContainerStarted","Data":"a8f06bbb9f7cf4f3ca439256c4954c4c9d8dc29aecadacb17f69885630d2324c"}
Feb 16 02:32:29.384297 master-0 kubenswrapper[31559]: I0216 02:32:29.384265 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" event={"ID":"eb84124a-4308-44e5-9eda-c3f70e494577","Type":"ContainerStarted","Data":"37bc89afa667175292b1dcc9cc7dac54e5e04e0bd6ecf4258731a4d5ab67ffb2"}
Feb 16 02:32:29.550209 master-0 kubenswrapper[31559]: I0216 02:32:29.550138 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-8z74l"]
Feb 16 02:32:29.562762 master-0 kubenswrapper[31559]: W0216 02:32:29.562709 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda06fbaeb_92be_4317_b5cf_23cb4b031376.slice/crio-db9837ef5c05ead49c391808128a21692b395fc69dad6995c61716711ddce07a WatchSource:0}: Error finding container db9837ef5c05ead49c391808128a21692b395fc69dad6995c61716711ddce07a: Status 404 returned error can't find the container with id db9837ef5c05ead49c391808128a21692b395fc69dad6995c61716711ddce07a
Feb 16 02:32:30.399564 master-0 kubenswrapper[31559]: I0216 02:32:30.399508 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-8z74l" event={"ID":"a06fbaeb-92be-4317-b5cf-23cb4b031376","Type":"ContainerStarted","Data":"5cf33608974c9ee6bbfbef39a7312644798796b4a0f72126f70d5e9aa0bbfe1b"}
Feb 16 02:32:30.399564 master-0 kubenswrapper[31559]: I0216 02:32:30.399561 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-8z74l" event={"ID":"a06fbaeb-92be-4317-b5cf-23cb4b031376","Type":"ContainerStarted","Data":"db9837ef5c05ead49c391808128a21692b395fc69dad6995c61716711ddce07a"}
Feb 16 02:32:30.423539 master-0 kubenswrapper[31559]: I0216 02:32:30.423471 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-8z74l" podStartSLOduration=2.423457271 podStartE2EDuration="2.423457271s" podCreationTimestamp="2026-02-16 02:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:32:30.417965915 +0000 UTC m=+602.762571930" watchObservedRunningTime="2026-02-16 02:32:30.423457271 +0000 UTC m=+602.768063286"
Feb 16 02:32:37.478654 master-0 kubenswrapper[31559]: I0216 02:32:37.478588 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5ts7h" event={"ID":"f7eaddb9-2146-4ec1-91fd-38c1ebab0da3","Type":"ContainerStarted","Data":"aaab94f622706afd49bb57693bdd5760c411c92933f2b5baa262aa4743925745"}
Feb 16 02:32:37.478654 master-0 kubenswrapper[31559]: I0216 02:32:37.478657 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-5ts7h"
Feb 16 02:32:37.480944 master-0 kubenswrapper[31559]: I0216 02:32:37.480119 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" event={"ID":"d289f4fe-8bec-45e3-be40-6b25c4f5f700","Type":"ContainerStarted","Data":"cf25144763d61120f9345306b0f7f335e1820aa9b640521907f07ff93a6d21c9"}
Feb 16 02:32:37.481370 master-0 kubenswrapper[31559]: I0216 02:32:37.481337 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl" event={"ID":"3992bcd2-e420-4dcb-8cdd-6364fced2ea3","Type":"ContainerStarted","Data":"ac5a4731f1489de69914a02c64ce00ba27287080278f6fe6ed0a8d0cacb9e987"}
Feb 16 02:32:37.483860 master-0 kubenswrapper[31559]: I0216 02:32:37.482715 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pv82g" event={"ID":"d6ea2b93-5da3-4daa-90b2-d120b74eb85c","Type":"ContainerStarted","Data":"2039172b35c3d872dc4bf160e271bb3343ba183d8dfc4946c0671f994888d90f"}
Feb 16 02:32:37.483860 master-0 kubenswrapper[31559]: I0216 02:32:37.482945 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-pv82g"
Feb 16 02:32:37.485184 master-0 kubenswrapper[31559]: I0216 02:32:37.484288 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" event={"ID":"eb84124a-4308-44e5-9eda-c3f70e494577","Type":"ContainerStarted","Data":"5b386a54341342521b9005ce1fa685a24ff4bd7f001bb229233a3cdb3bf94de5"}
Feb 16 02:32:37.526812 master-0 kubenswrapper[31559]: I0216 02:32:37.526695 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-5ts7h" podStartSLOduration=2.220192969 podStartE2EDuration="9.526667337s" podCreationTimestamp="2026-02-16 02:32:28 +0000 UTC" firstStartedPulling="2026-02-16 02:32:29.168094108 +0000 UTC m=+601.512700123" lastFinishedPulling="2026-02-16 02:32:36.474568466 +0000 UTC m=+608.819174491" observedRunningTime="2026-02-16 02:32:37.520193777 +0000 UTC m=+609.864799792" watchObservedRunningTime="2026-02-16 02:32:37.526667337 +0000 UTC m=+609.871273372"
Feb 16 02:32:37.529311 master-0 kubenswrapper[31559]: I0216 02:32:37.529251 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-pv82g"
Feb 16 02:32:37.548560 master-0 kubenswrapper[31559]: I0216 02:32:37.548377 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-tlbf7" podStartSLOduration=3.053938151 podStartE2EDuration="10.548356502s" podCreationTimestamp="2026-02-16 02:32:27 +0000 UTC" firstStartedPulling="2026-02-16 02:32:28.930321044 +0000 UTC m=+601.274927069" lastFinishedPulling="2026-02-16 02:32:36.424739375 +0000 UTC m=+608.769345420" observedRunningTime="2026-02-16 02:32:37.542229141 +0000 UTC m=+609.886835156" watchObservedRunningTime="2026-02-16 02:32:37.548356502 +0000 UTC m=+609.892962517"
Feb 16 02:32:37.594240 master-0 kubenswrapper[31559]: I0216 02:32:37.594085 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-pv82g" podStartSLOduration=3.04895528 podStartE2EDuration="10.594062962s" podCreationTimestamp="2026-02-16 02:32:27 +0000 UTC" firstStartedPulling="2026-02-16 02:32:29.009226104 +0000 UTC m=+601.353832119" lastFinishedPulling="2026-02-16 02:32:36.554333786 +0000 UTC m=+608.898939801" observedRunningTime="2026-02-16 02:32:37.591702263 +0000 UTC m=+609.936308298" watchObservedRunningTime="2026-02-16 02:32:37.594062962 +0000 UTC m=+609.938668997"
Feb 16 02:32:37.632970 master-0 kubenswrapper[31559]: I0216 02:32:37.631597 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gcxpl" podStartSLOduration=2.745953415 podStartE2EDuration="10.631581849s" podCreationTimestamp="2026-02-16 02:32:27 +0000 UTC" firstStartedPulling="2026-02-16 02:32:28.588945852 +0000 UTC m=+600.933551867" lastFinishedPulling="2026-02-16 02:32:36.474574246 +0000 UTC m=+608.819180301" observedRunningTime="2026-02-16 02:32:37.629933128 +0000 UTC m=+609.974539163" watchObservedRunningTime="2026-02-16 02:32:37.631581849 +0000 UTC m=+609.976187854"
Feb 16 02:32:37.667477 master-0 kubenswrapper[31559]: I0216 02:32:37.661312 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-78c94dfdd9-ftz87" podStartSLOduration=3.115267048 podStartE2EDuration="10.661293283s" podCreationTimestamp="2026-02-16 02:32:27 +0000 UTC" firstStartedPulling="2026-02-16 02:32:28.878673639 +0000 UTC m=+601.223279654" lastFinishedPulling="2026-02-16 02:32:36.424699874 +0000 UTC m=+608.769305889" observedRunningTime="2026-02-16 02:32:37.658849762 +0000 UTC m=+610.003455777" watchObservedRunningTime="2026-02-16 02:32:37.661293283 +0000 UTC m=+610.005899298"
Feb 16 02:32:41.128144 master-0 kubenswrapper[31559]: I0216 02:32:41.127987 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-599c6f98c8-4w4b2"
Feb 16 02:32:46.124484 master-0 kubenswrapper[31559]: I0216 02:32:46.124365 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 16 02:32:46.126000 master-0 kubenswrapper[31559]: I0216 02:32:46.125942 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.128716 master-0 kubenswrapper[31559]: I0216 02:32:46.128643 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-9tdhq"
Feb 16 02:32:46.131943 master-0 kubenswrapper[31559]: I0216 02:32:46.131880 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 02:32:46.143736 master-0 kubenswrapper[31559]: I0216 02:32:46.143650 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 16 02:32:46.224712 master-0 kubenswrapper[31559]: I0216 02:32:46.224633 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6d69fdb-255c-45f7-84f9-54da25a243d8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.224712 master-0 kubenswrapper[31559]: I0216 02:32:46.224721 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.225079 master-0 kubenswrapper[31559]: I0216 02:32:46.224803 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-var-lock\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.326915 master-0 kubenswrapper[31559]: I0216 02:32:46.326792 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.327218 master-0 kubenswrapper[31559]: I0216 02:32:46.327003 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-var-lock\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.327218 master-0 kubenswrapper[31559]: I0216 02:32:46.327111 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6d69fdb-255c-45f7-84f9-54da25a243d8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.327694 master-0 kubenswrapper[31559]: I0216 02:32:46.327636 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.327806 master-0 kubenswrapper[31559]: I0216 02:32:46.327715 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-var-lock\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.356951 master-0 kubenswrapper[31559]: I0216 02:32:46.356878 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6d69fdb-255c-45f7-84f9-54da25a243d8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.447062 master-0 kubenswrapper[31559]: I0216 02:32:46.446974 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 02:32:46.968343 master-0 kubenswrapper[31559]: I0216 02:32:46.968242 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 16 02:32:47.613819 master-0 kubenswrapper[31559]: I0216 02:32:47.613527 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"d6d69fdb-255c-45f7-84f9-54da25a243d8","Type":"ContainerStarted","Data":"9d920be4fe38e94375cffabfd9a1de1b225ceb2f967fb538c813df96bf70cf18"}
Feb 16 02:32:47.613819 master-0 kubenswrapper[31559]: I0216 02:32:47.613619 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"d6d69fdb-255c-45f7-84f9-54da25a243d8","Type":"ContainerStarted","Data":"edaafaf53a1027a12471ed62cf8fe675b03305c2eb6c584c09ef67adba08c2f2"}
Feb 16 02:32:47.651173 master-0 kubenswrapper[31559]: I0216 02:32:47.651040 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.6510040259999998 podStartE2EDuration="1.651004026s" podCreationTimestamp="2026-02-16 02:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:32:47.637239626 +0000 UTC m=+619.981845691" watchObservedRunningTime="2026-02-16 02:32:47.651004026 +0000 UTC m=+619.995610081"
Feb 16 02:32:48.556202 master-0 kubenswrapper[31559]: I0216 02:32:48.556145 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-5ts7h"
Feb 16 02:33:00.578787 master-0 kubenswrapper[31559]: I0216 02:33:00.578689 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-77bb9bb66d-68x8r"
Feb 16 02:33:08.621335 master-0 kubenswrapper[31559]: I0216 02:33:08.621242 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"]
Feb 16 02:33:08.622928 master-0 kubenswrapper[31559]: I0216 02:33:08.622897 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"
Feb 16 02:33:08.628930 master-0 kubenswrapper[31559]: I0216 02:33:08.628867 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 16 02:33:08.629784 master-0 kubenswrapper[31559]: I0216 02:33:08.629738 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgwf\" (UniqueName: \"kubernetes.io/projected/19493b3f-c29a-413d-84a0-dbd3edc939de-kube-api-access-2sgwf\") pod \"frr-k8s-webhook-server-78b44bf5bb-nd8kr\" (UID: \"19493b3f-c29a-413d-84a0-dbd3edc939de\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"
Feb 16 02:33:08.630010 master-0 kubenswrapper[31559]: I0216 02:33:08.629976 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19493b3f-c29a-413d-84a0-dbd3edc939de-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-nd8kr\" (UID: \"19493b3f-c29a-413d-84a0-dbd3edc939de\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"
Feb 16 02:33:08.639790 master-0 kubenswrapper[31559]: I0216 02:33:08.639741 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"]
Feb 16 02:33:08.649586 master-0 kubenswrapper[31559]: I0216 02:33:08.649508 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dvzts"]
Feb 16 02:33:08.656812 master-0 kubenswrapper[31559]: I0216 02:33:08.654010 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.660615 master-0 kubenswrapper[31559]: I0216 02:33:08.660546 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 16 02:33:08.660876 master-0 kubenswrapper[31559]: I0216 02:33:08.660844 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.733785 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgwf\" (UniqueName: \"kubernetes.io/projected/19493b3f-c29a-413d-84a0-dbd3edc939de-kube-api-access-2sgwf\") pod \"frr-k8s-webhook-server-78b44bf5bb-nd8kr\" (UID: \"19493b3f-c29a-413d-84a0-dbd3edc939de\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.733860 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-reloader\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.733896 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-sockets\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.733920 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-metrics\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.733935 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-startup\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.733957 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z25d\" (UniqueName: \"kubernetes.io/projected/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-kube-api-access-5z25d\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.734005 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-conf\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.734034 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19493b3f-c29a-413d-84a0-dbd3edc939de-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-nd8kr\" (UID: \"19493b3f-c29a-413d-84a0-dbd3edc939de\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: I0216 02:33:08.734061 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-metrics-certs\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: E0216 02:33:08.734656 31559 secret.go:189] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Feb 16 02:33:08.735560 master-0 kubenswrapper[31559]: E0216 02:33:08.734697 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19493b3f-c29a-413d-84a0-dbd3edc939de-cert podName:19493b3f-c29a-413d-84a0-dbd3edc939de nodeName:}" failed. No retries permitted until 2026-02-16 02:33:09.234680303 +0000 UTC m=+641.579286318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19493b3f-c29a-413d-84a0-dbd3edc939de-cert") pod "frr-k8s-webhook-server-78b44bf5bb-nd8kr" (UID: "19493b3f-c29a-413d-84a0-dbd3edc939de") : secret "frr-k8s-webhook-server-cert" not found
Feb 16 02:33:08.756485 master-0 kubenswrapper[31559]: I0216 02:33:08.754494 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2gs6z"]
Feb 16 02:33:08.756485 master-0 kubenswrapper[31559]: I0216 02:33:08.756126 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2gs6z"
Feb 16 02:33:08.759255 master-0 kubenswrapper[31559]: I0216 02:33:08.758692 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 16 02:33:08.759479 master-0 kubenswrapper[31559]: I0216 02:33:08.759392 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 16 02:33:08.760915 master-0 kubenswrapper[31559]: I0216 02:33:08.759598 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 16 02:33:08.768471 master-0 kubenswrapper[31559]: I0216 02:33:08.762936 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-rz89s"]
Feb 16 02:33:08.768471 master-0 kubenswrapper[31559]: I0216 02:33:08.764196 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-rz89s"
Feb 16 02:33:08.768471 master-0 kubenswrapper[31559]: I0216 02:33:08.765766 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 16 02:33:08.777349 master-0 kubenswrapper[31559]: I0216 02:33:08.777283 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-rz89s"]
Feb 16 02:33:08.784712 master-0 kubenswrapper[31559]: I0216 02:33:08.782267 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgwf\" (UniqueName: \"kubernetes.io/projected/19493b3f-c29a-413d-84a0-dbd3edc939de-kube-api-access-2sgwf\") pod \"frr-k8s-webhook-server-78b44bf5bb-nd8kr\" (UID: \"19493b3f-c29a-413d-84a0-dbd3edc939de\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.838591 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-conf\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.838687 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-metrics-certs\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.838719 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-reloader\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.839105 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-reloader\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.839120 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-sockets\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.839172 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-metrics\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.839190 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-startup\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.839214 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z25d\" (UniqueName: \"kubernetes.io/projected/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-kube-api-access-5z25d\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.839457 master-0 kubenswrapper[31559]: I0216 02:33:08.839401 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-conf\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.840013 master-0 kubenswrapper[31559]: I0216 02:33:08.839875 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-sockets\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.854456 master-0 kubenswrapper[31559]: I0216 02:33:08.840131 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-metrics\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.854456 master-0 kubenswrapper[31559]: I0216 02:33:08.841610 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-frr-startup\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.854456 master-0 kubenswrapper[31559]: I0216 02:33:08.842940 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-metrics-certs\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.877534 master-0 kubenswrapper[31559]: I0216 02:33:08.877290 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z25d\" (UniqueName: \"kubernetes.io/projected/5edbab06-c9ca-47ce-88ea-85c0ee5381f6-kube-api-access-5z25d\") pod \"frr-k8s-dvzts\" (UID: \"5edbab06-c9ca-47ce-88ea-85c0ee5381f6\") " pod="metallb-system/frr-k8s-dvzts"
Feb 16 02:33:08.940779 master-0 kubenswrapper[31559]: I0216 02:33:08.940383 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z"
Feb 16 02:33:08.940779 master-0 kubenswrapper[31559]: I0216 02:33:08.940597 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18768123-7a96-4518-b12c-bf696c0449db-metrics-certs\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s"
Feb 16 02:33:08.940779 master-0 kubenswrapper[31559]: I0216 02:33:08.940722 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\"
(UniqueName: \"kubernetes.io/secret/18768123-7a96-4518-b12c-bf696c0449db-cert\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:08.941245 master-0 kubenswrapper[31559]: I0216 02:33:08.940991 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9n9f\" (UniqueName: \"kubernetes.io/projected/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-kube-api-access-v9n9f\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:08.941245 master-0 kubenswrapper[31559]: I0216 02:33:08.941096 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-metrics-certs\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:08.941245 master-0 kubenswrapper[31559]: I0216 02:33:08.941179 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-metallb-excludel2\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:08.941245 master-0 kubenswrapper[31559]: I0216 02:33:08.941225 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vph\" (UniqueName: \"kubernetes.io/projected/18768123-7a96-4518-b12c-bf696c0449db-kube-api-access-92vph\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:09.003420 master-0 kubenswrapper[31559]: I0216 02:33:09.003334 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-dvzts" Feb 16 02:33:09.041776 master-0 kubenswrapper[31559]: I0216 02:33:09.041711 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9n9f\" (UniqueName: \"kubernetes.io/projected/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-kube-api-access-v9n9f\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:09.042454 master-0 kubenswrapper[31559]: I0216 02:33:09.042398 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-metrics-certs\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:09.042611 master-0 kubenswrapper[31559]: I0216 02:33:09.042569 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-metallb-excludel2\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:09.042690 master-0 kubenswrapper[31559]: I0216 02:33:09.042639 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vph\" (UniqueName: \"kubernetes.io/projected/18768123-7a96-4518-b12c-bf696c0449db-kube-api-access-92vph\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:09.042761 master-0 kubenswrapper[31559]: I0216 02:33:09.042735 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 
16 02:33:09.043050 master-0 kubenswrapper[31559]: I0216 02:33:09.043011 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18768123-7a96-4518-b12c-bf696c0449db-metrics-certs\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:09.043227 master-0 kubenswrapper[31559]: I0216 02:33:09.043189 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18768123-7a96-4518-b12c-bf696c0449db-cert\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:09.043784 master-0 kubenswrapper[31559]: I0216 02:33:09.043574 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-metallb-excludel2\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:09.044098 master-0 kubenswrapper[31559]: E0216 02:33:09.044051 31559 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 02:33:09.044178 master-0 kubenswrapper[31559]: E0216 02:33:09.044133 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist podName:7bf5962c-cc36-4cf3-8ffe-e16bce282d53 nodeName:}" failed. No retries permitted until 2026-02-16 02:33:09.544108872 +0000 UTC m=+641.888714927 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist") pod "speaker-2gs6z" (UID: "7bf5962c-cc36-4cf3-8ffe-e16bce282d53") : secret "metallb-memberlist" not found Feb 16 02:33:09.047619 master-0 kubenswrapper[31559]: I0216 02:33:09.046579 31559 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 02:33:09.047619 master-0 kubenswrapper[31559]: I0216 02:33:09.047543 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18768123-7a96-4518-b12c-bf696c0449db-metrics-certs\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:09.048518 master-0 kubenswrapper[31559]: I0216 02:33:09.048477 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-metrics-certs\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:09.059599 master-0 kubenswrapper[31559]: I0216 02:33:09.059538 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18768123-7a96-4518-b12c-bf696c0449db-cert\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:09.068602 master-0 kubenswrapper[31559]: I0216 02:33:09.068549 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92vph\" (UniqueName: \"kubernetes.io/projected/18768123-7a96-4518-b12c-bf696c0449db-kube-api-access-92vph\") pod \"controller-69bbfbf88f-rz89s\" (UID: \"18768123-7a96-4518-b12c-bf696c0449db\") " pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:09.072501 
master-0 kubenswrapper[31559]: I0216 02:33:09.072453 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9n9f\" (UniqueName: \"kubernetes.io/projected/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-kube-api-access-v9n9f\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:09.124027 master-0 kubenswrapper[31559]: I0216 02:33:09.123959 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:09.250552 master-0 kubenswrapper[31559]: I0216 02:33:09.250464 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19493b3f-c29a-413d-84a0-dbd3edc939de-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-nd8kr\" (UID: \"19493b3f-c29a-413d-84a0-dbd3edc939de\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr" Feb 16 02:33:09.262999 master-0 kubenswrapper[31559]: I0216 02:33:09.257792 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19493b3f-c29a-413d-84a0-dbd3edc939de-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-nd8kr\" (UID: \"19493b3f-c29a-413d-84a0-dbd3edc939de\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr" Feb 16 02:33:09.273831 master-0 kubenswrapper[31559]: I0216 02:33:09.273758 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr" Feb 16 02:33:09.555363 master-0 kubenswrapper[31559]: I0216 02:33:09.555253 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:09.555796 master-0 kubenswrapper[31559]: E0216 02:33:09.555476 31559 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 02:33:09.555796 master-0 kubenswrapper[31559]: E0216 02:33:09.555523 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist podName:7bf5962c-cc36-4cf3-8ffe-e16bce282d53 nodeName:}" failed. No retries permitted until 2026-02-16 02:33:10.555509634 +0000 UTC m=+642.900115649 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist") pod "speaker-2gs6z" (UID: "7bf5962c-cc36-4cf3-8ffe-e16bce282d53") : secret "metallb-memberlist" not found Feb 16 02:33:09.645129 master-0 kubenswrapper[31559]: I0216 02:33:09.645019 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-rz89s"] Feb 16 02:33:09.646255 master-0 kubenswrapper[31559]: W0216 02:33:09.646187 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18768123_7a96_4518_b12c_bf696c0449db.slice/crio-8f9109548ba8ad253c03f7ebf9f16da558757e5119be1fa0dd58cf72e1878beb WatchSource:0}: Error finding container 8f9109548ba8ad253c03f7ebf9f16da558757e5119be1fa0dd58cf72e1878beb: Status 404 returned error can't find the container with id 8f9109548ba8ad253c03f7ebf9f16da558757e5119be1fa0dd58cf72e1878beb Feb 16 02:33:09.731328 master-0 kubenswrapper[31559]: I0216 02:33:09.731278 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr"] Feb 16 02:33:09.751005 master-0 kubenswrapper[31559]: W0216 02:33:09.750931 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19493b3f_c29a_413d_84a0_dbd3edc939de.slice/crio-236498370d1bb7dfa7def62f7a4f62473609049b19be32445e4ba9ba379583a3 WatchSource:0}: Error finding container 236498370d1bb7dfa7def62f7a4f62473609049b19be32445e4ba9ba379583a3: Status 404 returned error can't find the container with id 236498370d1bb7dfa7def62f7a4f62473609049b19be32445e4ba9ba379583a3 Feb 16 02:33:09.843811 master-0 kubenswrapper[31559]: I0216 02:33:09.843760 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-rz89s" 
event={"ID":"18768123-7a96-4518-b12c-bf696c0449db","Type":"ContainerStarted","Data":"01bb0e4b726be16008dce2158911ec9d1aca1898cab47c9b964bbad40d026ff8"} Feb 16 02:33:09.843953 master-0 kubenswrapper[31559]: I0216 02:33:09.843935 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-rz89s" event={"ID":"18768123-7a96-4518-b12c-bf696c0449db","Type":"ContainerStarted","Data":"8f9109548ba8ad253c03f7ebf9f16da558757e5119be1fa0dd58cf72e1878beb"} Feb 16 02:33:09.845636 master-0 kubenswrapper[31559]: I0216 02:33:09.845585 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr" event={"ID":"19493b3f-c29a-413d-84a0-dbd3edc939de","Type":"ContainerStarted","Data":"236498370d1bb7dfa7def62f7a4f62473609049b19be32445e4ba9ba379583a3"} Feb 16 02:33:09.847692 master-0 kubenswrapper[31559]: I0216 02:33:09.847620 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerStarted","Data":"fbca5ab0aca166a4a713ee39ffde8fa09d897f2a4c058102e953ded63c581b2a"} Feb 16 02:33:10.576103 master-0 kubenswrapper[31559]: I0216 02:33:10.576027 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:10.579648 master-0 kubenswrapper[31559]: I0216 02:33:10.579619 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bf5962c-cc36-4cf3-8ffe-e16bce282d53-memberlist\") pod \"speaker-2gs6z\" (UID: \"7bf5962c-cc36-4cf3-8ffe-e16bce282d53\") " pod="metallb-system/speaker-2gs6z" Feb 16 02:33:10.613369 master-0 kubenswrapper[31559]: I0216 02:33:10.613312 31559 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="metallb-system/speaker-2gs6z" Feb 16 02:33:10.639419 master-0 kubenswrapper[31559]: W0216 02:33:10.639348 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bf5962c_cc36_4cf3_8ffe_e16bce282d53.slice/crio-1eec88ef0d1074c5349a2daa4e6aebb6a5bfc1b537803eb425b2e6a0a6f90760 WatchSource:0}: Error finding container 1eec88ef0d1074c5349a2daa4e6aebb6a5bfc1b537803eb425b2e6a0a6f90760: Status 404 returned error can't find the container with id 1eec88ef0d1074c5349a2daa4e6aebb6a5bfc1b537803eb425b2e6a0a6f90760 Feb 16 02:33:10.729749 master-0 kubenswrapper[31559]: I0216 02:33:10.729624 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kv979"] Feb 16 02:33:10.734427 master-0 kubenswrapper[31559]: I0216 02:33:10.731067 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" Feb 16 02:33:10.752844 master-0 kubenswrapper[31559]: I0216 02:33:10.742281 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8"] Feb 16 02:33:10.752844 master-0 kubenswrapper[31559]: I0216 02:33:10.747336 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:10.752844 master-0 kubenswrapper[31559]: I0216 02:33:10.748814 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 02:33:10.752844 master-0 kubenswrapper[31559]: I0216 02:33:10.750653 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kv979"] Feb 16 02:33:10.757975 master-0 kubenswrapper[31559]: I0216 02:33:10.756784 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8"] Feb 16 02:33:10.766899 master-0 kubenswrapper[31559]: I0216 02:33:10.765321 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kc4qt"] Feb 16 02:33:10.768073 master-0 kubenswrapper[31559]: I0216 02:33:10.768031 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.861759 master-0 kubenswrapper[31559]: I0216 02:33:10.861312 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2gs6z" event={"ID":"7bf5962c-cc36-4cf3-8ffe-e16bce282d53","Type":"ContainerStarted","Data":"1eec88ef0d1074c5349a2daa4e6aebb6a5bfc1b537803eb425b2e6a0a6f90760"} Feb 16 02:33:10.883490 master-0 kubenswrapper[31559]: I0216 02:33:10.882508 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-nmstate-lock\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.883490 master-0 kubenswrapper[31559]: I0216 02:33:10.882653 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfl8x\" (UniqueName: 
\"kubernetes.io/projected/320d604e-e520-413c-ae30-a9fdc811fa53-kube-api-access-xfl8x\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.883490 master-0 kubenswrapper[31559]: I0216 02:33:10.882746 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkjl\" (UniqueName: \"kubernetes.io/projected/f63a64f9-6655-4020-9694-1c03f02981d9-kube-api-access-6pkjl\") pod \"nmstate-metrics-58c85c668d-kv979\" (UID: \"f63a64f9-6655-4020-9694-1c03f02981d9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" Feb 16 02:33:10.883490 master-0 kubenswrapper[31559]: I0216 02:33:10.882810 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1648575c-6eb8-42a0-9c03-857e848692c6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-zxqf8\" (UID: \"1648575c-6eb8-42a0-9c03-857e848692c6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:10.883490 master-0 kubenswrapper[31559]: I0216 02:33:10.882875 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-dbus-socket\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.883490 master-0 kubenswrapper[31559]: I0216 02:33:10.882986 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpk5b\" (UniqueName: \"kubernetes.io/projected/1648575c-6eb8-42a0-9c03-857e848692c6-kube-api-access-cpk5b\") pod \"nmstate-webhook-866bcb46dc-zxqf8\" (UID: \"1648575c-6eb8-42a0-9c03-857e848692c6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:10.883490 master-0 
kubenswrapper[31559]: I0216 02:33:10.883094 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-ovs-socket\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.904039 master-0 kubenswrapper[31559]: I0216 02:33:10.903991 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf"] Feb 16 02:33:10.905168 master-0 kubenswrapper[31559]: I0216 02:33:10.905146 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:10.907744 master-0 kubenswrapper[31559]: I0216 02:33:10.907630 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 02:33:10.908008 master-0 kubenswrapper[31559]: I0216 02:33:10.907972 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 02:33:10.914177 master-0 kubenswrapper[31559]: I0216 02:33:10.914107 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf"] Feb 16 02:33:10.993474 master-0 kubenswrapper[31559]: I0216 02:33:10.993411 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfl8x\" (UniqueName: \"kubernetes.io/projected/320d604e-e520-413c-ae30-a9fdc811fa53-kube-api-access-xfl8x\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.993562 master-0 kubenswrapper[31559]: I0216 02:33:10.993497 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pkjl\" (UniqueName: 
\"kubernetes.io/projected/f63a64f9-6655-4020-9694-1c03f02981d9-kube-api-access-6pkjl\") pod \"nmstate-metrics-58c85c668d-kv979\" (UID: \"f63a64f9-6655-4020-9694-1c03f02981d9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" Feb 16 02:33:10.993562 master-0 kubenswrapper[31559]: I0216 02:33:10.993525 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1648575c-6eb8-42a0-9c03-857e848692c6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-zxqf8\" (UID: \"1648575c-6eb8-42a0-9c03-857e848692c6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:10.993652 master-0 kubenswrapper[31559]: E0216 02:33:10.993633 31559 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 16 02:33:10.993818 master-0 kubenswrapper[31559]: I0216 02:33:10.993792 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-dbus-socket\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.993862 master-0 kubenswrapper[31559]: I0216 02:33:10.993851 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpk5b\" (UniqueName: \"kubernetes.io/projected/1648575c-6eb8-42a0-9c03-857e848692c6-kube-api-access-cpk5b\") pod \"nmstate-webhook-866bcb46dc-zxqf8\" (UID: \"1648575c-6eb8-42a0-9c03-857e848692c6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:10.993951 master-0 kubenswrapper[31559]: E0216 02:33:10.993878 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1648575c-6eb8-42a0-9c03-857e848692c6-tls-key-pair podName:1648575c-6eb8-42a0-9c03-857e848692c6 nodeName:}" failed. 
No retries permitted until 2026-02-16 02:33:11.493855059 +0000 UTC m=+643.838461074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/1648575c-6eb8-42a0-9c03-857e848692c6-tls-key-pair") pod "nmstate-webhook-866bcb46dc-zxqf8" (UID: "1648575c-6eb8-42a0-9c03-857e848692c6") : secret "openshift-nmstate-webhook" not found Feb 16 02:33:10.993995 master-0 kubenswrapper[31559]: I0216 02:33:10.993957 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-dbus-socket\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.994105 master-0 kubenswrapper[31559]: I0216 02:33:10.994080 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-ovs-socket\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.994143 master-0 kubenswrapper[31559]: I0216 02:33:10.994091 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-ovs-socket\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.994191 master-0 kubenswrapper[31559]: I0216 02:33:10.994176 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-nmstate-lock\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:10.998465 master-0 kubenswrapper[31559]: I0216 
02:33:10.994804 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/320d604e-e520-413c-ae30-a9fdc811fa53-nmstate-lock\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:11.022040 master-0 kubenswrapper[31559]: I0216 02:33:11.020182 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfl8x\" (UniqueName: \"kubernetes.io/projected/320d604e-e520-413c-ae30-a9fdc811fa53-kube-api-access-xfl8x\") pod \"nmstate-handler-kc4qt\" (UID: \"320d604e-e520-413c-ae30-a9fdc811fa53\") " pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:11.022265 master-0 kubenswrapper[31559]: I0216 02:33:11.022103 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpk5b\" (UniqueName: \"kubernetes.io/projected/1648575c-6eb8-42a0-9c03-857e848692c6-kube-api-access-cpk5b\") pod \"nmstate-webhook-866bcb46dc-zxqf8\" (UID: \"1648575c-6eb8-42a0-9c03-857e848692c6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:11.023959 master-0 kubenswrapper[31559]: I0216 02:33:11.023351 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pkjl\" (UniqueName: \"kubernetes.io/projected/f63a64f9-6655-4020-9694-1c03f02981d9-kube-api-access-6pkjl\") pod \"nmstate-metrics-58c85c668d-kv979\" (UID: \"f63a64f9-6655-4020-9694-1c03f02981d9\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" Feb 16 02:33:11.096530 master-0 kubenswrapper[31559]: I0216 02:33:11.095645 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/009d278d-7798-419f-9fd3-c927da6cce48-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.096530 master-0 kubenswrapper[31559]: I0216 02:33:11.095709 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gg8l\" (UniqueName: \"kubernetes.io/projected/009d278d-7798-419f-9fd3-c927da6cce48-kube-api-access-2gg8l\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.096530 master-0 kubenswrapper[31559]: I0216 02:33:11.095755 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/009d278d-7798-419f-9fd3-c927da6cce48-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.096530 master-0 kubenswrapper[31559]: I0216 02:33:11.095946 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" Feb 16 02:33:11.108687 master-0 kubenswrapper[31559]: I0216 02:33:11.108639 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-665f676c66-czhzj"] Feb 16 02:33:11.110707 master-0 kubenswrapper[31559]: I0216 02:33:11.110538 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.119821 master-0 kubenswrapper[31559]: I0216 02:33:11.119391 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-665f676c66-czhzj"] Feb 16 02:33:11.120034 master-0 kubenswrapper[31559]: I0216 02:33:11.119997 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:11.198033 master-0 kubenswrapper[31559]: I0216 02:33:11.197993 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgng\" (UniqueName: \"kubernetes.io/projected/f056aa8e-08ab-4ecc-a809-97e23224e014-kube-api-access-ctgng\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.198155 master-0 kubenswrapper[31559]: I0216 02:33:11.198052 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-trusted-ca-bundle\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.198155 master-0 kubenswrapper[31559]: I0216 02:33:11.198116 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f056aa8e-08ab-4ecc-a809-97e23224e014-console-serving-cert\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.198257 master-0 kubenswrapper[31559]: I0216 02:33:11.198169 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-service-ca\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.198257 master-0 kubenswrapper[31559]: I0216 02:33:11.198204 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/009d278d-7798-419f-9fd3-c927da6cce48-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.198257 master-0 kubenswrapper[31559]: I0216 02:33:11.198250 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f056aa8e-08ab-4ecc-a809-97e23224e014-console-oauth-config\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.198399 master-0 kubenswrapper[31559]: I0216 02:33:11.198288 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gg8l\" (UniqueName: \"kubernetes.io/projected/009d278d-7798-419f-9fd3-c927da6cce48-kube-api-access-2gg8l\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.198399 master-0 kubenswrapper[31559]: I0216 02:33:11.198317 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-oauth-serving-cert\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.198399 master-0 kubenswrapper[31559]: I0216 02:33:11.198354 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-console-config\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.198553 
master-0 kubenswrapper[31559]: I0216 02:33:11.198400 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/009d278d-7798-419f-9fd3-c927da6cce48-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.199958 master-0 kubenswrapper[31559]: I0216 02:33:11.199920 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/009d278d-7798-419f-9fd3-c927da6cce48-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.204164 master-0 kubenswrapper[31559]: I0216 02:33:11.204128 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/009d278d-7798-419f-9fd3-c927da6cce48-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.226152 master-0 kubenswrapper[31559]: I0216 02:33:11.226109 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gg8l\" (UniqueName: \"kubernetes.io/projected/009d278d-7798-419f-9fd3-c927da6cce48-kube-api-access-2gg8l\") pod \"nmstate-console-plugin-5c78fc5d65-hj8qf\" (UID: \"009d278d-7798-419f-9fd3-c927da6cce48\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.248472 master-0 kubenswrapper[31559]: I0216 02:33:11.248392 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" Feb 16 02:33:11.301626 master-0 kubenswrapper[31559]: I0216 02:33:11.300149 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f056aa8e-08ab-4ecc-a809-97e23224e014-console-serving-cert\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.301626 master-0 kubenswrapper[31559]: I0216 02:33:11.300223 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-service-ca\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.301626 master-0 kubenswrapper[31559]: I0216 02:33:11.300276 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f056aa8e-08ab-4ecc-a809-97e23224e014-console-oauth-config\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.301626 master-0 kubenswrapper[31559]: I0216 02:33:11.300301 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-oauth-serving-cert\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.301626 master-0 kubenswrapper[31559]: I0216 02:33:11.300326 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-console-config\") pod 
\"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.301626 master-0 kubenswrapper[31559]: I0216 02:33:11.300391 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctgng\" (UniqueName: \"kubernetes.io/projected/f056aa8e-08ab-4ecc-a809-97e23224e014-kube-api-access-ctgng\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.301626 master-0 kubenswrapper[31559]: I0216 02:33:11.300412 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-trusted-ca-bundle\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.301626 master-0 kubenswrapper[31559]: I0216 02:33:11.301579 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-trusted-ca-bundle\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.301980 master-0 kubenswrapper[31559]: I0216 02:33:11.301704 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-oauth-serving-cert\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.304644 master-0 kubenswrapper[31559]: I0216 02:33:11.302278 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-service-ca\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.304644 master-0 kubenswrapper[31559]: I0216 02:33:11.302410 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f056aa8e-08ab-4ecc-a809-97e23224e014-console-config\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.305725 master-0 kubenswrapper[31559]: I0216 02:33:11.305365 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f056aa8e-08ab-4ecc-a809-97e23224e014-console-serving-cert\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.306933 master-0 kubenswrapper[31559]: I0216 02:33:11.306910 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f056aa8e-08ab-4ecc-a809-97e23224e014-console-oauth-config\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.317684 master-0 kubenswrapper[31559]: I0216 02:33:11.316682 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctgng\" (UniqueName: \"kubernetes.io/projected/f056aa8e-08ab-4ecc-a809-97e23224e014-kube-api-access-ctgng\") pod \"console-665f676c66-czhzj\" (UID: \"f056aa8e-08ab-4ecc-a809-97e23224e014\") " pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.509464 master-0 kubenswrapper[31559]: I0216 02:33:11.503792 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:11.529607 master-0 kubenswrapper[31559]: I0216 02:33:11.528358 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1648575c-6eb8-42a0-9c03-857e848692c6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-zxqf8\" (UID: \"1648575c-6eb8-42a0-9c03-857e848692c6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:11.550168 master-0 kubenswrapper[31559]: I0216 02:33:11.545415 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1648575c-6eb8-42a0-9c03-857e848692c6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-zxqf8\" (UID: \"1648575c-6eb8-42a0-9c03-857e848692c6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:11.610999 master-0 kubenswrapper[31559]: W0216 02:33:11.610956 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf63a64f9_6655_4020_9694_1c03f02981d9.slice/crio-a6c98ab90ccd02ab0aa96d6ecd3c54d8613e55027640cb8884327d0a196f890f WatchSource:0}: Error finding container a6c98ab90ccd02ab0aa96d6ecd3c54d8613e55027640cb8884327d0a196f890f: Status 404 returned error can't find the container with id a6c98ab90ccd02ab0aa96d6ecd3c54d8613e55027640cb8884327d0a196f890f Feb 16 02:33:11.612759 master-0 kubenswrapper[31559]: I0216 02:33:11.612718 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kv979"] Feb 16 02:33:11.701527 master-0 kubenswrapper[31559]: I0216 02:33:11.699083 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf"] Feb 16 02:33:11.708022 master-0 kubenswrapper[31559]: W0216 02:33:11.707963 31559 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod009d278d_7798_419f_9fd3_c927da6cce48.slice/crio-3a4dcdfc36ad8b095f08e79151ea08cbd53805cf841723d1cfb21dfaa6a9f947 WatchSource:0}: Error finding container 3a4dcdfc36ad8b095f08e79151ea08cbd53805cf841723d1cfb21dfaa6a9f947: Status 404 returned error can't find the container with id 3a4dcdfc36ad8b095f08e79151ea08cbd53805cf841723d1cfb21dfaa6a9f947 Feb 16 02:33:11.711198 master-0 kubenswrapper[31559]: I0216 02:33:11.711160 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:11.876164 master-0 kubenswrapper[31559]: I0216 02:33:11.876105 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2gs6z" event={"ID":"7bf5962c-cc36-4cf3-8ffe-e16bce282d53","Type":"ContainerStarted","Data":"1478eb704c0491dccea199101937b5701df6962bdaffb731a9b24f2a2138eaac"} Feb 16 02:33:11.880496 master-0 kubenswrapper[31559]: I0216 02:33:11.879506 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" event={"ID":"f63a64f9-6655-4020-9694-1c03f02981d9","Type":"ContainerStarted","Data":"a6c98ab90ccd02ab0aa96d6ecd3c54d8613e55027640cb8884327d0a196f890f"} Feb 16 02:33:11.880908 master-0 kubenswrapper[31559]: I0216 02:33:11.880673 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" event={"ID":"009d278d-7798-419f-9fd3-c927da6cce48","Type":"ContainerStarted","Data":"3a4dcdfc36ad8b095f08e79151ea08cbd53805cf841723d1cfb21dfaa6a9f947"} Feb 16 02:33:11.881794 master-0 kubenswrapper[31559]: I0216 02:33:11.881765 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kc4qt" event={"ID":"320d604e-e520-413c-ae30-a9fdc811fa53","Type":"ContainerStarted","Data":"2a8ffad5fa50316877083a1fae6b16d8b13f8459cf2af73ad0befe7fe4726612"} Feb 16 02:33:11.972646 master-0 
kubenswrapper[31559]: I0216 02:33:11.970907 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-665f676c66-czhzj"] Feb 16 02:33:12.180275 master-0 kubenswrapper[31559]: I0216 02:33:12.180223 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8"] Feb 16 02:33:12.180655 master-0 kubenswrapper[31559]: W0216 02:33:12.180576 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1648575c_6eb8_42a0_9c03_857e848692c6.slice/crio-eb830e8b2e9fc60a7de46cb10bb48a5a9b907af6944db66b92fe17363f3e71bd WatchSource:0}: Error finding container eb830e8b2e9fc60a7de46cb10bb48a5a9b907af6944db66b92fe17363f3e71bd: Status 404 returned error can't find the container with id eb830e8b2e9fc60a7de46cb10bb48a5a9b907af6944db66b92fe17363f3e71bd Feb 16 02:33:12.892909 master-0 kubenswrapper[31559]: I0216 02:33:12.892849 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" event={"ID":"1648575c-6eb8-42a0-9c03-857e848692c6","Type":"ContainerStarted","Data":"eb830e8b2e9fc60a7de46cb10bb48a5a9b907af6944db66b92fe17363f3e71bd"} Feb 16 02:33:12.896318 master-0 kubenswrapper[31559]: I0216 02:33:12.896267 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-rz89s" event={"ID":"18768123-7a96-4518-b12c-bf696c0449db","Type":"ContainerStarted","Data":"b1218459fc36fa25a4cdaff97bf4163907d23c1e3a45497ef0607a56bee81c2a"} Feb 16 02:33:12.896413 master-0 kubenswrapper[31559]: I0216 02:33:12.896367 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:12.901280 master-0 kubenswrapper[31559]: I0216 02:33:12.900722 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-665f676c66-czhzj" 
event={"ID":"f056aa8e-08ab-4ecc-a809-97e23224e014","Type":"ContainerStarted","Data":"19297db94222c336507c192c2aec3bd87e1cab94b1ccc6624e57fa01165dacc9"} Feb 16 02:33:12.901280 master-0 kubenswrapper[31559]: I0216 02:33:12.900763 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-665f676c66-czhzj" event={"ID":"f056aa8e-08ab-4ecc-a809-97e23224e014","Type":"ContainerStarted","Data":"df786cc2eebf9dac92f1f863e2c70d58b26b0170ad7998eeb6202e2057026374"} Feb 16 02:33:12.942448 master-0 kubenswrapper[31559]: I0216 02:33:12.942318 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-rz89s" podStartSLOduration=2.07097352 podStartE2EDuration="4.942289821s" podCreationTimestamp="2026-02-16 02:33:08 +0000 UTC" firstStartedPulling="2026-02-16 02:33:09.829151712 +0000 UTC m=+642.173757757" lastFinishedPulling="2026-02-16 02:33:12.700468043 +0000 UTC m=+645.045074058" observedRunningTime="2026-02-16 02:33:12.920136547 +0000 UTC m=+645.264742562" watchObservedRunningTime="2026-02-16 02:33:12.942289821 +0000 UTC m=+645.286895826" Feb 16 02:33:12.944793 master-0 kubenswrapper[31559]: I0216 02:33:12.944704 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-665f676c66-czhzj" podStartSLOduration=1.9446786820000002 podStartE2EDuration="1.944678682s" podCreationTimestamp="2026-02-16 02:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:33:12.944005765 +0000 UTC m=+645.288611820" watchObservedRunningTime="2026-02-16 02:33:12.944678682 +0000 UTC m=+645.289284697" Feb 16 02:33:13.912899 master-0 kubenswrapper[31559]: I0216 02:33:13.912786 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2gs6z" 
event={"ID":"7bf5962c-cc36-4cf3-8ffe-e16bce282d53","Type":"ContainerStarted","Data":"9a53d9b4ea5c88e29da4cdd45a48e5564326ade42990caa3b584667bf14316d8"} Feb 16 02:33:13.913996 master-0 kubenswrapper[31559]: I0216 02:33:13.913029 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2gs6z" Feb 16 02:33:13.934749 master-0 kubenswrapper[31559]: I0216 02:33:13.934654 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2gs6z" podStartSLOduration=4.209324778 podStartE2EDuration="5.934636699s" podCreationTimestamp="2026-02-16 02:33:08 +0000 UTC" firstStartedPulling="2026-02-16 02:33:10.983014052 +0000 UTC m=+643.327620067" lastFinishedPulling="2026-02-16 02:33:12.708325973 +0000 UTC m=+645.052931988" observedRunningTime="2026-02-16 02:33:13.929480048 +0000 UTC m=+646.274086063" watchObservedRunningTime="2026-02-16 02:33:13.934636699 +0000 UTC m=+646.279242724" Feb 16 02:33:17.966735 master-0 kubenswrapper[31559]: I0216 02:33:17.966665 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" event={"ID":"f63a64f9-6655-4020-9694-1c03f02981d9","Type":"ContainerStarted","Data":"0b6f141c21c6703eedf5628fe24634321df743746273c7d4baf190f76d0a85eb"} Feb 16 02:33:17.966735 master-0 kubenswrapper[31559]: I0216 02:33:17.966723 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" event={"ID":"f63a64f9-6655-4020-9694-1c03f02981d9","Type":"ContainerStarted","Data":"ef29a1707343c7c34071529a6e6a08f367fac3bc99db3c3fa0b8a8a50709af6f"} Feb 16 02:33:17.973817 master-0 kubenswrapper[31559]: I0216 02:33:17.973753 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr" event={"ID":"19493b3f-c29a-413d-84a0-dbd3edc939de","Type":"ContainerStarted","Data":"6f263055f8db2f5371cbd64fd407546137a01fc9c71b97e44ed3b4612942430d"} Feb 16 
02:33:17.974270 master-0 kubenswrapper[31559]: I0216 02:33:17.974248 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr" Feb 16 02:33:17.976742 master-0 kubenswrapper[31559]: I0216 02:33:17.976711 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" event={"ID":"009d278d-7798-419f-9fd3-c927da6cce48","Type":"ContainerStarted","Data":"8181aa995445c4a2f17bdc26f3fe2112cdcdbb0409abe86d44d364768506a0f3"} Feb 16 02:33:17.979337 master-0 kubenswrapper[31559]: I0216 02:33:17.979284 31559 generic.go:334] "Generic (PLEG): container finished" podID="5edbab06-c9ca-47ce-88ea-85c0ee5381f6" containerID="132600cebc0a6fff6d07ed19989bda279ff4a90b45320bcad0037d086a91dfd2" exitCode=0 Feb 16 02:33:17.979450 master-0 kubenswrapper[31559]: I0216 02:33:17.979357 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerDied","Data":"132600cebc0a6fff6d07ed19989bda279ff4a90b45320bcad0037d086a91dfd2"} Feb 16 02:33:17.981870 master-0 kubenswrapper[31559]: I0216 02:33:17.981828 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kc4qt" event={"ID":"320d604e-e520-413c-ae30-a9fdc811fa53","Type":"ContainerStarted","Data":"6411a26a71b1c429619c9b3d7214d3894712b3f3a9b984dbfdda578da51bd118"} Feb 16 02:33:17.982488 master-0 kubenswrapper[31559]: I0216 02:33:17.982427 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:17.991462 master-0 kubenswrapper[31559]: I0216 02:33:17.991374 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" event={"ID":"1648575c-6eb8-42a0-9c03-857e848692c6","Type":"ContainerStarted","Data":"99cf0d11fc7b26a1382d7a39b097923fb176297b0aa83fe8796980b21bf92b72"} 
Feb 16 02:33:17.992266 master-0 kubenswrapper[31559]: I0216 02:33:17.992236 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:18.089565 master-0 kubenswrapper[31559]: I0216 02:33:18.088564 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr" podStartSLOduration=2.6626031340000003 podStartE2EDuration="10.088537909s" podCreationTimestamp="2026-02-16 02:33:08 +0000 UTC" firstStartedPulling="2026-02-16 02:33:09.750729185 +0000 UTC m=+642.095335250" lastFinishedPulling="2026-02-16 02:33:17.176664 +0000 UTC m=+649.521270025" observedRunningTime="2026-02-16 02:33:18.076866942 +0000 UTC m=+650.421472977" watchObservedRunningTime="2026-02-16 02:33:18.088537909 +0000 UTC m=+650.433143924" Feb 16 02:33:18.110511 master-0 kubenswrapper[31559]: I0216 02:33:18.110409 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" podStartSLOduration=3.120431868 podStartE2EDuration="8.110388716s" podCreationTimestamp="2026-02-16 02:33:10 +0000 UTC" firstStartedPulling="2026-02-16 02:33:12.184832004 +0000 UTC m=+644.529438019" lastFinishedPulling="2026-02-16 02:33:17.174788842 +0000 UTC m=+649.519394867" observedRunningTime="2026-02-16 02:33:18.103284535 +0000 UTC m=+650.447890630" watchObservedRunningTime="2026-02-16 02:33:18.110388716 +0000 UTC m=+650.454994741" Feb 16 02:33:18.150701 master-0 kubenswrapper[31559]: I0216 02:33:18.150488 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kv979" podStartSLOduration=2.639606644 podStartE2EDuration="8.150465066s" podCreationTimestamp="2026-02-16 02:33:10 +0000 UTC" firstStartedPulling="2026-02-16 02:33:11.613164448 +0000 UTC m=+643.957770463" lastFinishedPulling="2026-02-16 02:33:17.12402284 +0000 UTC m=+649.468628885" 
observedRunningTime="2026-02-16 02:33:18.137015354 +0000 UTC m=+650.481621379" watchObservedRunningTime="2026-02-16 02:33:18.150465066 +0000 UTC m=+650.495071091" Feb 16 02:33:18.190557 master-0 kubenswrapper[31559]: I0216 02:33:18.190419 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hj8qf" podStartSLOduration=2.7775762779999997 podStartE2EDuration="8.190393483s" podCreationTimestamp="2026-02-16 02:33:10 +0000 UTC" firstStartedPulling="2026-02-16 02:33:11.711093582 +0000 UTC m=+644.055699597" lastFinishedPulling="2026-02-16 02:33:17.123910747 +0000 UTC m=+649.468516802" observedRunningTime="2026-02-16 02:33:18.176116679 +0000 UTC m=+650.520722694" watchObservedRunningTime="2026-02-16 02:33:18.190393483 +0000 UTC m=+650.534999498" Feb 16 02:33:18.200101 master-0 kubenswrapper[31559]: I0216 02:33:18.199827 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kc4qt" podStartSLOduration=2.181934391 podStartE2EDuration="8.199811323s" podCreationTimestamp="2026-02-16 02:33:10 +0000 UTC" firstStartedPulling="2026-02-16 02:33:11.15885504 +0000 UTC m=+643.503461065" lastFinishedPulling="2026-02-16 02:33:17.176731942 +0000 UTC m=+649.521337997" observedRunningTime="2026-02-16 02:33:18.195108053 +0000 UTC m=+650.539714068" watchObservedRunningTime="2026-02-16 02:33:18.199811323 +0000 UTC m=+650.544417328" Feb 16 02:33:19.008095 master-0 kubenswrapper[31559]: I0216 02:33:19.008024 31559 generic.go:334] "Generic (PLEG): container finished" podID="5edbab06-c9ca-47ce-88ea-85c0ee5381f6" containerID="f7e3cb73d018e8ba074b3d529d7082d4afc6588b3f7c2a850dc0bd803ec0543f" exitCode=0 Feb 16 02:33:19.009207 master-0 kubenswrapper[31559]: I0216 02:33:19.008090 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" 
event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerDied","Data":"f7e3cb73d018e8ba074b3d529d7082d4afc6588b3f7c2a850dc0bd803ec0543f"} Feb 16 02:33:19.133499 master-0 kubenswrapper[31559]: I0216 02:33:19.133420 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-rz89s" Feb 16 02:33:20.027247 master-0 kubenswrapper[31559]: I0216 02:33:20.024533 31559 generic.go:334] "Generic (PLEG): container finished" podID="5edbab06-c9ca-47ce-88ea-85c0ee5381f6" containerID="57a41347fd70b8da9318f9550a579038f6d3096e5e49a476adb836c5ac3f7075" exitCode=0 Feb 16 02:33:20.027247 master-0 kubenswrapper[31559]: I0216 02:33:20.026593 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerDied","Data":"57a41347fd70b8da9318f9550a579038f6d3096e5e49a476adb836c5ac3f7075"} Feb 16 02:33:20.713606 master-0 kubenswrapper[31559]: I0216 02:33:20.709824 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2gs6z" Feb 16 02:33:20.831077 master-0 kubenswrapper[31559]: I0216 02:33:20.830987 31559 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:33:20.831323 master-0 kubenswrapper[31559]: I0216 02:33:20.831266 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://47f370468f9a506b6024de7fb2029d49ff3b6445c9e16b06204e3c886ebdacc9" gracePeriod=30 Feb 16 02:33:20.831478 master-0 kubenswrapper[31559]: I0216 02:33:20.831425 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" containerID="cri-o://8e65952f68dc70a998ca96fc43cd86d783845ba696b7cee768810bbdde0b1b72" gracePeriod=30 Feb 16 02:33:20.831575 master-0 kubenswrapper[31559]: I0216 02:33:20.831510 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://ab9f31d8a9dea7f17fe5df1556062b9ee37acd8a1e22d617b3329084d777dce1" gracePeriod=30 Feb 16 02:33:20.831575 master-0 kubenswrapper[31559]: I0216 02:33:20.831551 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" containerID="cri-o://789e61bec232bf870ef2e4f73549435ac6af8ac001a93d4407c58240635552e4" gracePeriod=30 Feb 16 02:33:20.834836 master-0 kubenswrapper[31559]: I0216 02:33:20.833190 31559 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:33:20.835274 master-0 kubenswrapper[31559]: E0216 02:33:20.835226 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-cert-syncer" Feb 16 02:33:20.835274 master-0 kubenswrapper[31559]: I0216 02:33:20.835254 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-cert-syncer" Feb 16 02:33:20.835274 master-0 kubenswrapper[31559]: E0216 02:33:20.835275 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-cert-syncer" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: I0216 02:33:20.835284 31559 
state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-cert-syncer" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: E0216 02:33:20.835302 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: I0216 02:33:20.835311 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: E0216 02:33:20.835331 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: I0216 02:33:20.835339 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: E0216 02:33:20.835360 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: I0216 02:33:20.835368 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: E0216 02:33:20.835380 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: I0216 02:33:20.835388 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: E0216 02:33:20.835410 31559 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-recovery-controller" Feb 16 02:33:20.835530 master-0 kubenswrapper[31559]: I0216 02:33:20.835418 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-recovery-controller" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.835648 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-cert-syncer" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.835674 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.835693 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-cert-syncer" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.835713 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager-recovery-controller" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.835728 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.835740 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.835758 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: 
I0216 02:33:20.835777 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: E0216 02:33:20.835957 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.835968 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="cluster-policy-controller" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: E0216 02:33:20.835991 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.836000 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.836206 master-0 kubenswrapper[31559]: I0216 02:33:20.836206 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="532487ad51c30257b744e7c1c79fb34f" containerName="kube-controller-manager" Feb 16 02:33:20.900736 master-0 kubenswrapper[31559]: I0216 02:33:20.900632 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4e95c6249d50e8af594b7c6338f6db3-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f4e95c6249d50e8af594b7c6338f6db3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:20.900736 master-0 kubenswrapper[31559]: I0216 02:33:20.900742 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4e95c6249d50e8af594b7c6338f6db3-cert-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"f4e95c6249d50e8af594b7c6338f6db3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:21.004370 master-0 kubenswrapper[31559]: I0216 02:33:21.004251 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4e95c6249d50e8af594b7c6338f6db3-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f4e95c6249d50e8af594b7c6338f6db3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:21.004370 master-0 kubenswrapper[31559]: I0216 02:33:21.004369 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4e95c6249d50e8af594b7c6338f6db3-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f4e95c6249d50e8af594b7c6338f6db3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:21.004735 master-0 kubenswrapper[31559]: I0216 02:33:21.004547 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4e95c6249d50e8af594b7c6338f6db3-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f4e95c6249d50e8af594b7c6338f6db3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:21.005519 master-0 kubenswrapper[31559]: I0216 02:33:21.005473 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4e95c6249d50e8af594b7c6338f6db3-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f4e95c6249d50e8af594b7c6338f6db3\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:21.036812 master-0 kubenswrapper[31559]: I0216 02:33:21.036709 31559 generic.go:334] "Generic (PLEG): container finished" podID="d6d69fdb-255c-45f7-84f9-54da25a243d8" 
containerID="9d920be4fe38e94375cffabfd9a1de1b225ceb2f967fb538c813df96bf70cf18" exitCode=0 Feb 16 02:33:21.039932 master-0 kubenswrapper[31559]: I0216 02:33:21.036819 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"d6d69fdb-255c-45f7-84f9-54da25a243d8","Type":"ContainerDied","Data":"9d920be4fe38e94375cffabfd9a1de1b225ceb2f967fb538c813df96bf70cf18"} Feb 16 02:33:21.044672 master-0 kubenswrapper[31559]: I0216 02:33:21.044080 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerStarted","Data":"b49e362dab407755d5168bc31910a1a84ec1ee55d73c28e5503b735dc72f9acb"} Feb 16 02:33:21.047921 master-0 kubenswrapper[31559]: I0216 02:33:21.047854 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager/3.log" Feb 16 02:33:21.048941 master-0 kubenswrapper[31559]: I0216 02:33:21.048878 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/1.log" Feb 16 02:33:21.050170 master-0 kubenswrapper[31559]: I0216 02:33:21.050114 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log" Feb 16 02:33:21.051253 master-0 kubenswrapper[31559]: I0216 02:33:21.051203 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/0.log" Feb 16 02:33:21.051253 master-0 kubenswrapper[31559]: I0216 02:33:21.051249 31559 generic.go:334] "Generic (PLEG): container finished" 
podID="532487ad51c30257b744e7c1c79fb34f" containerID="8e65952f68dc70a998ca96fc43cd86d783845ba696b7cee768810bbdde0b1b72" exitCode=0 Feb 16 02:33:21.051545 master-0 kubenswrapper[31559]: I0216 02:33:21.051265 31559 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="ab9f31d8a9dea7f17fe5df1556062b9ee37acd8a1e22d617b3329084d777dce1" exitCode=2 Feb 16 02:33:21.051545 master-0 kubenswrapper[31559]: I0216 02:33:21.051277 31559 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="789e61bec232bf870ef2e4f73549435ac6af8ac001a93d4407c58240635552e4" exitCode=0 Feb 16 02:33:21.051545 master-0 kubenswrapper[31559]: I0216 02:33:21.051284 31559 generic.go:334] "Generic (PLEG): container finished" podID="532487ad51c30257b744e7c1c79fb34f" containerID="47f370468f9a506b6024de7fb2029d49ff3b6445c9e16b06204e3c886ebdacc9" exitCode=0 Feb 16 02:33:21.051545 master-0 kubenswrapper[31559]: I0216 02:33:21.051323 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48118d83188cff04b48b2d21c92d5267795e6e491327e95878cf252a4b94caea" Feb 16 02:33:21.051545 master-0 kubenswrapper[31559]: I0216 02:33:21.051360 31559 scope.go:117] "RemoveContainer" containerID="7936c2730f8175510de2d253b1e10cfbfb35dc725232ac6b9454cae07e1ba691" Feb 16 02:33:21.067682 master-0 kubenswrapper[31559]: I0216 02:33:21.067599 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="532487ad51c30257b744e7c1c79fb34f" podUID="f4e95c6249d50e8af594b7c6338f6db3" Feb 16 02:33:21.088347 master-0 kubenswrapper[31559]: I0216 02:33:21.087944 31559 scope.go:117] "RemoveContainer" containerID="c1607c7a684a009a85d360c6358aedc027d89ca14606abafaf65b0d9cbaca7c9" Feb 16 02:33:21.104631 master-0 kubenswrapper[31559]: I0216 02:33:21.104419 31559 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/1.log" Feb 16 02:33:21.105828 master-0 kubenswrapper[31559]: I0216 02:33:21.105801 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/cluster-policy-controller/3.log" Feb 16 02:33:21.106964 master-0 kubenswrapper[31559]: I0216 02:33:21.106925 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:21.111707 master-0 kubenswrapper[31559]: I0216 02:33:21.111666 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="532487ad51c30257b744e7c1c79fb34f" podUID="f4e95c6249d50e8af594b7c6338f6db3" Feb 16 02:33:21.130870 master-0 kubenswrapper[31559]: I0216 02:33:21.130818 31559 scope.go:117] "RemoveContainer" containerID="da8d9bb8a4f56bffe45c048c1c373bcf9e89a06a44a652b40fb9bf76cc60fa15" Feb 16 02:33:21.207557 master-0 kubenswrapper[31559]: I0216 02:33:21.206115 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir\") pod \"532487ad51c30257b744e7c1c79fb34f\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " Feb 16 02:33:21.207557 master-0 kubenswrapper[31559]: I0216 02:33:21.206271 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "532487ad51c30257b744e7c1c79fb34f" (UID: "532487ad51c30257b744e7c1c79fb34f"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:33:21.207557 master-0 kubenswrapper[31559]: I0216 02:33:21.206314 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir\") pod \"532487ad51c30257b744e7c1c79fb34f\" (UID: \"532487ad51c30257b744e7c1c79fb34f\") " Feb 16 02:33:21.207557 master-0 kubenswrapper[31559]: I0216 02:33:21.206344 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "532487ad51c30257b744e7c1c79fb34f" (UID: "532487ad51c30257b744e7c1c79fb34f"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:33:21.207557 master-0 kubenswrapper[31559]: I0216 02:33:21.206960 31559 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:33:21.207557 master-0 kubenswrapper[31559]: I0216 02:33:21.206989 31559 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/532487ad51c30257b744e7c1c79fb34f-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:33:21.504918 master-0 kubenswrapper[31559]: I0216 02:33:21.504846 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:21.505110 master-0 kubenswrapper[31559]: I0216 02:33:21.504947 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:21.513361 master-0 kubenswrapper[31559]: I0216 02:33:21.513237 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-665f676c66-czhzj" Feb 16 
02:33:21.939020 master-0 kubenswrapper[31559]: I0216 02:33:21.938604 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532487ad51c30257b744e7c1c79fb34f" path="/var/lib/kubelet/pods/532487ad51c30257b744e7c1c79fb34f/volumes" Feb 16 02:33:22.070628 master-0 kubenswrapper[31559]: I0216 02:33:22.070470 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerStarted","Data":"1d60c425c88fc368ff0226d2996635de2718ed355a05712ae918eb21fae4d2a7"} Feb 16 02:33:22.070628 master-0 kubenswrapper[31559]: I0216 02:33:22.070558 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerStarted","Data":"2db80755c66a07df4bf7aa97642eb3d7a886a2466226a2f72d2fbb7ada1b930f"} Feb 16 02:33:22.070628 master-0 kubenswrapper[31559]: I0216 02:33:22.070593 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerStarted","Data":"a978a78e6ca8daa522afde3611e0a486be7eedf10b1e9eb49f670eb0901bfef6"} Feb 16 02:33:22.070628 master-0 kubenswrapper[31559]: I0216 02:33:22.070623 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerStarted","Data":"d41532d5cd2aa7be1e86d7521f277fa8a7873ccf53c7e276ea1a6000e5be418a"} Feb 16 02:33:22.073642 master-0 kubenswrapper[31559]: I0216 02:33:22.073574 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_532487ad51c30257b744e7c1c79fb34f/kube-controller-manager-cert-syncer/1.log" Feb 16 02:33:22.074808 master-0 kubenswrapper[31559]: I0216 02:33:22.074758 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:22.078988 master-0 kubenswrapper[31559]: I0216 02:33:22.078905 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="532487ad51c30257b744e7c1c79fb34f" podUID="f4e95c6249d50e8af594b7c6338f6db3" Feb 16 02:33:22.079544 master-0 kubenswrapper[31559]: I0216 02:33:22.079387 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-665f676c66-czhzj" Feb 16 02:33:22.109181 master-0 kubenswrapper[31559]: I0216 02:33:22.109093 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="532487ad51c30257b744e7c1c79fb34f" podUID="f4e95c6249d50e8af594b7c6338f6db3" Feb 16 02:33:22.501792 master-0 kubenswrapper[31559]: I0216 02:33:22.501726 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 02:33:22.641761 master-0 kubenswrapper[31559]: I0216 02:33:22.641667 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6d69fdb-255c-45f7-84f9-54da25a243d8-kube-api-access\") pod \"d6d69fdb-255c-45f7-84f9-54da25a243d8\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " Feb 16 02:33:22.642070 master-0 kubenswrapper[31559]: I0216 02:33:22.641878 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-kubelet-dir\") pod \"d6d69fdb-255c-45f7-84f9-54da25a243d8\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " Feb 16 02:33:22.642070 master-0 kubenswrapper[31559]: I0216 02:33:22.641996 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-var-lock\") pod \"d6d69fdb-255c-45f7-84f9-54da25a243d8\" (UID: \"d6d69fdb-255c-45f7-84f9-54da25a243d8\") " Feb 16 02:33:22.642219 master-0 kubenswrapper[31559]: I0216 02:33:22.642070 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d6d69fdb-255c-45f7-84f9-54da25a243d8" (UID: "d6d69fdb-255c-45f7-84f9-54da25a243d8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:33:22.642293 master-0 kubenswrapper[31559]: I0216 02:33:22.642234 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-var-lock" (OuterVolumeSpecName: "var-lock") pod "d6d69fdb-255c-45f7-84f9-54da25a243d8" (UID: "d6d69fdb-255c-45f7-84f9-54da25a243d8"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:33:22.642987 master-0 kubenswrapper[31559]: I0216 02:33:22.642934 31559 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:33:22.643078 master-0 kubenswrapper[31559]: I0216 02:33:22.643021 31559 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d6d69fdb-255c-45f7-84f9-54da25a243d8-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 02:33:22.648014 master-0 kubenswrapper[31559]: I0216 02:33:22.647917 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6d69fdb-255c-45f7-84f9-54da25a243d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d6d69fdb-255c-45f7-84f9-54da25a243d8" (UID: "d6d69fdb-255c-45f7-84f9-54da25a243d8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:33:22.745782 master-0 kubenswrapper[31559]: I0216 02:33:22.745679 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6d69fdb-255c-45f7-84f9-54da25a243d8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 02:33:23.092400 master-0 kubenswrapper[31559]: I0216 02:33:23.092233 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"d6d69fdb-255c-45f7-84f9-54da25a243d8","Type":"ContainerDied","Data":"edaafaf53a1027a12471ed62cf8fe675b03305c2eb6c584c09ef67adba08c2f2"} Feb 16 02:33:23.092400 master-0 kubenswrapper[31559]: I0216 02:33:23.092315 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edaafaf53a1027a12471ed62cf8fe675b03305c2eb6c584c09ef67adba08c2f2" Feb 16 02:33:23.093309 master-0 kubenswrapper[31559]: I0216 02:33:23.092429 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 02:33:23.106527 master-0 kubenswrapper[31559]: I0216 02:33:23.106422 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dvzts" event={"ID":"5edbab06-c9ca-47ce-88ea-85c0ee5381f6","Type":"ContainerStarted","Data":"3201871d548a2788e580de631711652662d872fcb73cb5407889b815b5f8eb6b"} Feb 16 02:33:23.106737 master-0 kubenswrapper[31559]: I0216 02:33:23.106554 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dvzts" Feb 16 02:33:23.152354 master-0 kubenswrapper[31559]: I0216 02:33:23.152233 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dvzts" podStartSLOduration=7.211713135 podStartE2EDuration="15.152209533s" podCreationTimestamp="2026-02-16 02:33:08 +0000 UTC" firstStartedPulling="2026-02-16 02:33:09.190718725 +0000 UTC m=+641.535324770" lastFinishedPulling="2026-02-16 02:33:17.131215113 +0000 UTC m=+649.475821168" observedRunningTime="2026-02-16 02:33:23.145948734 +0000 UTC m=+655.490554789" watchObservedRunningTime="2026-02-16 02:33:23.152209533 +0000 UTC m=+655.496815578" Feb 16 02:33:24.004473 master-0 kubenswrapper[31559]: I0216 02:33:24.004377 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dvzts" Feb 16 02:33:24.069850 master-0 kubenswrapper[31559]: I0216 02:33:24.069013 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dvzts" Feb 16 02:33:26.151396 master-0 kubenswrapper[31559]: I0216 02:33:26.151338 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-kc4qt" Feb 16 02:33:28.498243 master-0 kubenswrapper[31559]: I0216 02:33:28.497987 31559 scope.go:117] "RemoveContainer" containerID="789e61bec232bf870ef2e4f73549435ac6af8ac001a93d4407c58240635552e4" Feb 16 02:33:28.533601 
master-0 kubenswrapper[31559]: I0216 02:33:28.533150 31559 scope.go:117] "RemoveContainer" containerID="4ae7fbb377e485e07836359913aacb47c32cf66aed58a1093fb7dd0ba0700be6" Feb 16 02:33:28.563084 master-0 kubenswrapper[31559]: I0216 02:33:28.563009 31559 scope.go:117] "RemoveContainer" containerID="47f370468f9a506b6024de7fb2029d49ff3b6445c9e16b06204e3c886ebdacc9" Feb 16 02:33:28.591484 master-0 kubenswrapper[31559]: I0216 02:33:28.591416 31559 scope.go:117] "RemoveContainer" containerID="ab9f31d8a9dea7f17fe5df1556062b9ee37acd8a1e22d617b3329084d777dce1" Feb 16 02:33:29.279557 master-0 kubenswrapper[31559]: I0216 02:33:29.279483 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nd8kr" Feb 16 02:33:31.721552 master-0 kubenswrapper[31559]: I0216 02:33:31.721414 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-zxqf8" Feb 16 02:33:32.924806 master-0 kubenswrapper[31559]: I0216 02:33:32.924649 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:32.966740 master-0 kubenswrapper[31559]: I0216 02:33:32.966669 31559 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b5b3e5e2-1d39-4531-9b22-53ebf9da15be" Feb 16 02:33:32.967050 master-0 kubenswrapper[31559]: I0216 02:33:32.967025 31559 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="b5b3e5e2-1d39-4531-9b22-53ebf9da15be" Feb 16 02:33:32.992904 master-0 kubenswrapper[31559]: I0216 02:33:32.992832 31559 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:32.999804 master-0 kubenswrapper[31559]: I0216 02:33:32.999654 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:33:33.012825 master-0 kubenswrapper[31559]: I0216 02:33:33.012753 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:33:33.014624 master-0 kubenswrapper[31559]: I0216 02:33:33.014513 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:33.023273 master-0 kubenswrapper[31559]: I0216 02:33:33.023135 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 02:33:33.279127 master-0 kubenswrapper[31559]: I0216 02:33:33.279001 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f4e95c6249d50e8af594b7c6338f6db3","Type":"ContainerStarted","Data":"98321f978e18857a2f54b8dfcff0313a6b770e31fff2a68d94c0d84f3da854ff"} Feb 16 02:33:34.306241 master-0 kubenswrapper[31559]: I0216 02:33:34.306075 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f4e95c6249d50e8af594b7c6338f6db3","Type":"ContainerStarted","Data":"350aedb3a3332622880ec96a649b3fa9380fa149986cdbb40127ad62bf807853"} Feb 16 02:33:34.306241 master-0 kubenswrapper[31559]: I0216 02:33:34.306157 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f4e95c6249d50e8af594b7c6338f6db3","Type":"ContainerStarted","Data":"358b0b025effa4340526540e27145f5baeeba3c07f1c4ab117d4a14ce3d2b82b"} Feb 16 02:33:34.306241 master-0 kubenswrapper[31559]: I0216 02:33:34.306185 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f4e95c6249d50e8af594b7c6338f6db3","Type":"ContainerStarted","Data":"0484a9c889030406067ba76f80bf851a7ec1d3ba54da4525bda9bc69d11499ca"} Feb 16 02:33:35.322149 master-0 kubenswrapper[31559]: I0216 02:33:35.322034 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"f4e95c6249d50e8af594b7c6338f6db3","Type":"ContainerStarted","Data":"bb0af84b58c235921020fb7cdd9c046193bff113836b9b60bff2f9b96fe016ae"} Feb 16 02:33:35.365294 master-0 kubenswrapper[31559]: I0216 02:33:35.365136 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.365096118 podStartE2EDuration="2.365096118s" podCreationTimestamp="2026-02-16 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:33:35.349998613 +0000 UTC m=+667.694604668" watchObservedRunningTime="2026-02-16 02:33:35.365096118 +0000 UTC m=+667.709702173" Feb 16 02:33:39.006977 master-0 kubenswrapper[31559]: I0216 02:33:39.006888 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dvzts" Feb 16 02:33:43.014857 master-0 kubenswrapper[31559]: I0216 02:33:43.014730 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:43.014857 master-0 kubenswrapper[31559]: I0216 02:33:43.014826 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:43.014857 master-0 kubenswrapper[31559]: I0216 02:33:43.014851 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:43.014857 master-0 kubenswrapper[31559]: I0216 02:33:43.014871 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:43.016823 master-0 kubenswrapper[31559]: I0216 02:33:43.015209 31559 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 
container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 16 02:33:43.016823 master-0 kubenswrapper[31559]: I0216 02:33:43.015275 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f4e95c6249d50e8af594b7c6338f6db3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 16 02:33:43.022758 master-0 kubenswrapper[31559]: I0216 02:33:43.022680 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:43.431906 master-0 kubenswrapper[31559]: I0216 02:33:43.431804 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:33:53.015815 master-0 kubenswrapper[31559]: I0216 02:33:53.015727 31559 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 16 02:33:53.015815 master-0 kubenswrapper[31559]: I0216 02:33:53.015783 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f4e95c6249d50e8af594b7c6338f6db3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 16 02:34:03.015715 master-0 kubenswrapper[31559]: I0216 02:34:03.015587 31559 
patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 16 02:34:03.015715 master-0 kubenswrapper[31559]: I0216 02:34:03.015701 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f4e95c6249d50e8af594b7c6338f6db3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 16 02:34:03.017056 master-0 kubenswrapper[31559]: I0216 02:34:03.015791 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:34:03.017139 master-0 kubenswrapper[31559]: I0216 02:34:03.017044 31559 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"0484a9c889030406067ba76f80bf851a7ec1d3ba54da4525bda9bc69d11499ca"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 02:34:03.017376 master-0 kubenswrapper[31559]: I0216 02:34:03.017307 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f4e95c6249d50e8af594b7c6338f6db3" containerName="kube-controller-manager" containerID="cri-o://0484a9c889030406067ba76f80bf851a7ec1d3ba54da4525bda9bc69d11499ca" gracePeriod=30 Feb 16 02:34:34.052964 master-0 kubenswrapper[31559]: I0216 02:34:34.052896 31559 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f4e95c6249d50e8af594b7c6338f6db3/kube-controller-manager/0.log" Feb 16 02:34:34.053629 master-0 kubenswrapper[31559]: I0216 02:34:34.052984 31559 generic.go:334] "Generic (PLEG): container finished" podID="f4e95c6249d50e8af594b7c6338f6db3" containerID="0484a9c889030406067ba76f80bf851a7ec1d3ba54da4525bda9bc69d11499ca" exitCode=137 Feb 16 02:34:34.053629 master-0 kubenswrapper[31559]: I0216 02:34:34.053027 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f4e95c6249d50e8af594b7c6338f6db3","Type":"ContainerDied","Data":"0484a9c889030406067ba76f80bf851a7ec1d3ba54da4525bda9bc69d11499ca"} Feb 16 02:34:34.053629 master-0 kubenswrapper[31559]: I0216 02:34:34.053069 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f4e95c6249d50e8af594b7c6338f6db3","Type":"ContainerStarted","Data":"72fbfbf46be99fcc7f35295ab0dfa899e511bf9cc4e9ff6c4021895322a0af05"} Feb 16 02:34:43.014750 master-0 kubenswrapper[31559]: I0216 02:34:43.014618 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:34:43.014750 master-0 kubenswrapper[31559]: I0216 02:34:43.014734 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:34:43.021336 master-0 kubenswrapper[31559]: I0216 02:34:43.020108 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:34:43.171553 master-0 kubenswrapper[31559]: I0216 02:34:43.171497 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 02:34:53.163587 master-0 kubenswrapper[31559]: I0216 02:34:53.163525 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-596bfc78ff-27s7m"] Feb 16 02:34:58.268454 master-0 kubenswrapper[31559]: I0216 02:34:58.268311 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-7rn9k"] Feb 16 02:34:58.276591 master-0 kubenswrapper[31559]: E0216 02:34:58.276512 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6d69fdb-255c-45f7-84f9-54da25a243d8" containerName="installer" Feb 16 02:34:58.276591 master-0 kubenswrapper[31559]: I0216 02:34:58.276571 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d69fdb-255c-45f7-84f9-54da25a243d8" containerName="installer" Feb 16 02:34:58.276941 master-0 kubenswrapper[31559]: I0216 02:34:58.276824 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6d69fdb-255c-45f7-84f9-54da25a243d8" containerName="installer" Feb 16 02:34:58.277732 master-0 kubenswrapper[31559]: I0216 02:34:58.277701 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.280224 master-0 kubenswrapper[31559]: I0216 02:34:58.280133 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Feb 16 02:34:58.308022 master-0 kubenswrapper[31559]: I0216 02:34:58.307914 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-7rn9k"] Feb 16 02:34:58.357764 master-0 kubenswrapper[31559]: I0216 02:34:58.357676 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-node-plugin-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.357934 master-0 kubenswrapper[31559]: I0216 02:34:58.357835 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-metrics-cert\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.357934 master-0 kubenswrapper[31559]: I0216 02:34:58.357878 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-pod-volumes-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.358079 master-0 kubenswrapper[31559]: I0216 02:34:58.357931 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-registration-dir\") pod \"vg-manager-7rn9k\" (UID: 
\"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.358079 master-0 kubenswrapper[31559]: I0216 02:34:58.357971 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-csi-plugin-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.358079 master-0 kubenswrapper[31559]: I0216 02:34:58.358000 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-lvmd-config\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.358320 master-0 kubenswrapper[31559]: I0216 02:34:58.358079 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-device-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.358320 master-0 kubenswrapper[31559]: I0216 02:34:58.358115 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-run-udev\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.358320 master-0 kubenswrapper[31559]: I0216 02:34:58.358199 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-file-lock-dir\") pod \"vg-manager-7rn9k\" 
(UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.358320 master-0 kubenswrapper[31559]: I0216 02:34:58.358234 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7s5f\" (UniqueName: \"kubernetes.io/projected/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-kube-api-access-x7s5f\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.358320 master-0 kubenswrapper[31559]: I0216 02:34:58.358274 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-sys\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459528 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-pod-volumes-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459602 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-registration-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459630 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-csi-plugin-dir\") pod \"vg-manager-7rn9k\" (UID: 
\"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459647 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-lvmd-config\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459701 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-device-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459727 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-run-udev\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459752 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-file-lock-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459772 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7s5f\" (UniqueName: \"kubernetes.io/projected/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-kube-api-access-x7s5f\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " 
pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459792 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-sys\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459826 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-node-plugin-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459837 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-pod-volumes-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.459988 master-0 kubenswrapper[31559]: I0216 02:34:58.459886 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-metrics-cert\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.461004 master-0 kubenswrapper[31559]: I0216 02:34:58.460152 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-run-udev\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.461004 master-0 kubenswrapper[31559]: I0216 02:34:58.460264 
31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-registration-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.461004 master-0 kubenswrapper[31559]: I0216 02:34:58.460633 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-csi-plugin-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.461004 master-0 kubenswrapper[31559]: I0216 02:34:58.460775 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-sys\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.461004 master-0 kubenswrapper[31559]: I0216 02:34:58.460845 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-lvmd-config\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.461004 master-0 kubenswrapper[31559]: I0216 02:34:58.460925 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-device-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.461004 master-0 kubenswrapper[31559]: I0216 02:34:58.460938 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: 
\"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-node-plugin-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.461004 master-0 kubenswrapper[31559]: I0216 02:34:58.460968 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-file-lock-dir\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.471970 master-0 kubenswrapper[31559]: I0216 02:34:58.471765 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-metrics-cert\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.485061 master-0 kubenswrapper[31559]: I0216 02:34:58.484889 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7s5f\" (UniqueName: \"kubernetes.io/projected/af7d6727-be97-4a9f-9a47-9ef306aa5cf9-kube-api-access-x7s5f\") pod \"vg-manager-7rn9k\" (UID: \"af7d6727-be97-4a9f-9a47-9ef306aa5cf9\") " pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:58.635813 master-0 kubenswrapper[31559]: I0216 02:34:58.635636 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:34:59.216135 master-0 kubenswrapper[31559]: I0216 02:34:59.216051 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-7rn9k"] Feb 16 02:34:59.366819 master-0 kubenswrapper[31559]: I0216 02:34:59.366739 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-7rn9k" event={"ID":"af7d6727-be97-4a9f-9a47-9ef306aa5cf9","Type":"ContainerStarted","Data":"c7e27db2d9826270edcd31aec5f81b4553aa9a5ffa487920393f2f87f9eb4632"} Feb 16 02:35:00.447455 master-0 kubenswrapper[31559]: I0216 02:35:00.443847 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-7rn9k" event={"ID":"af7d6727-be97-4a9f-9a47-9ef306aa5cf9","Type":"ContainerStarted","Data":"f82dcfbda84aa6d32ef6cc1fc42ba396220ed8285d48cb8279789a7233fccc9c"} Feb 16 02:35:00.486454 master-0 kubenswrapper[31559]: I0216 02:35:00.485866 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-7rn9k" podStartSLOduration=2.485847274 podStartE2EDuration="2.485847274s" podCreationTimestamp="2026-02-16 02:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:35:00.482105377 +0000 UTC m=+752.826711432" watchObservedRunningTime="2026-02-16 02:35:00.485847274 +0000 UTC m=+752.830453289" Feb 16 02:35:01.455972 master-0 kubenswrapper[31559]: I0216 02:35:01.455898 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-7rn9k_af7d6727-be97-4a9f-9a47-9ef306aa5cf9/vg-manager/0.log" Feb 16 02:35:01.455972 master-0 kubenswrapper[31559]: I0216 02:35:01.455963 31559 generic.go:334] "Generic (PLEG): container finished" podID="af7d6727-be97-4a9f-9a47-9ef306aa5cf9" containerID="f82dcfbda84aa6d32ef6cc1fc42ba396220ed8285d48cb8279789a7233fccc9c" exitCode=1 Feb 16 02:35:01.456896 
master-0 kubenswrapper[31559]: I0216 02:35:01.455997 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-7rn9k" event={"ID":"af7d6727-be97-4a9f-9a47-9ef306aa5cf9","Type":"ContainerDied","Data":"f82dcfbda84aa6d32ef6cc1fc42ba396220ed8285d48cb8279789a7233fccc9c"} Feb 16 02:35:01.456896 master-0 kubenswrapper[31559]: I0216 02:35:01.456780 31559 scope.go:117] "RemoveContainer" containerID="f82dcfbda84aa6d32ef6cc1fc42ba396220ed8285d48cb8279789a7233fccc9c" Feb 16 02:35:01.822464 master-0 kubenswrapper[31559]: I0216 02:35:01.822353 31559 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Feb 16 02:35:02.475002 master-0 kubenswrapper[31559]: I0216 02:35:02.471399 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-7rn9k_af7d6727-be97-4a9f-9a47-9ef306aa5cf9/vg-manager/0.log" Feb 16 02:35:02.475002 master-0 kubenswrapper[31559]: I0216 02:35:02.471465 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-7rn9k" event={"ID":"af7d6727-be97-4a9f-9a47-9ef306aa5cf9","Type":"ContainerStarted","Data":"8c1a66037916bfa91cdc6cbce9183272aeeaf232e97f779ae07da62db30af168"} Feb 16 02:35:02.766549 master-0 kubenswrapper[31559]: I0216 02:35:02.766200 31559 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-16T02:35:01.822397638Z","Handler":null,"Name":""} Feb 16 02:35:02.766816 master-0 kubenswrapper[31559]: I0216 02:35:02.766640 31559 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Feb 16 02:35:02.766816 master-0 kubenswrapper[31559]: I0216 02:35:02.766718 31559 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at 
endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Feb 16 02:35:08.636475 master-0 kubenswrapper[31559]: I0216 02:35:08.636321 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:35:08.640301 master-0 kubenswrapper[31559]: I0216 02:35:08.640223 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:35:09.602972 master-0 kubenswrapper[31559]: I0216 02:35:09.598665 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:35:09.602972 master-0 kubenswrapper[31559]: I0216 02:35:09.599843 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-7rn9k" Feb 16 02:35:11.771605 master-0 kubenswrapper[31559]: I0216 02:35:11.771543 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-l5t7v"] Feb 16 02:35:11.772545 master-0 kubenswrapper[31559]: I0216 02:35:11.772517 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:11.774452 master-0 kubenswrapper[31559]: I0216 02:35:11.774393 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 02:35:11.774969 master-0 kubenswrapper[31559]: I0216 02:35:11.774939 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 02:35:11.788420 master-0 kubenswrapper[31559]: I0216 02:35:11.788353 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l5t7v"] Feb 16 02:35:11.953425 master-0 kubenswrapper[31559]: I0216 02:35:11.953360 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt8xd\" (UniqueName: \"kubernetes.io/projected/7d0c0f07-c7d6-4ca7-ae31-56bcadc57a9e-kube-api-access-vt8xd\") pod \"openstack-operator-index-l5t7v\" (UID: \"7d0c0f07-c7d6-4ca7-ae31-56bcadc57a9e\") " pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:12.056758 master-0 kubenswrapper[31559]: I0216 02:35:12.056626 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt8xd\" (UniqueName: \"kubernetes.io/projected/7d0c0f07-c7d6-4ca7-ae31-56bcadc57a9e-kube-api-access-vt8xd\") pod \"openstack-operator-index-l5t7v\" (UID: \"7d0c0f07-c7d6-4ca7-ae31-56bcadc57a9e\") " pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:12.075734 master-0 kubenswrapper[31559]: I0216 02:35:12.075650 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt8xd\" (UniqueName: \"kubernetes.io/projected/7d0c0f07-c7d6-4ca7-ae31-56bcadc57a9e-kube-api-access-vt8xd\") pod \"openstack-operator-index-l5t7v\" (UID: \"7d0c0f07-c7d6-4ca7-ae31-56bcadc57a9e\") " pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:12.090497 master-0 
kubenswrapper[31559]: I0216 02:35:12.090390 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:12.608688 master-0 kubenswrapper[31559]: I0216 02:35:12.608637 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l5t7v"] Feb 16 02:35:12.609875 master-0 kubenswrapper[31559]: I0216 02:35:12.609687 31559 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 02:35:12.631563 master-0 kubenswrapper[31559]: I0216 02:35:12.631505 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l5t7v" event={"ID":"7d0c0f07-c7d6-4ca7-ae31-56bcadc57a9e","Type":"ContainerStarted","Data":"d87a9242ed8b0318a7f15297450e2b8e84d1d63d3d78ad02251a4f86e2bc7ee3"} Feb 16 02:35:14.656624 master-0 kubenswrapper[31559]: I0216 02:35:14.656532 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l5t7v" event={"ID":"7d0c0f07-c7d6-4ca7-ae31-56bcadc57a9e","Type":"ContainerStarted","Data":"a587043e2e6a11005e1a39fc9db8ddc5f5c61bac78dd2676aa93236bc6130eda"} Feb 16 02:35:14.683072 master-0 kubenswrapper[31559]: I0216 02:35:14.682942 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-l5t7v" podStartSLOduration=2.850819849 podStartE2EDuration="3.682915429s" podCreationTimestamp="2026-02-16 02:35:11 +0000 UTC" firstStartedPulling="2026-02-16 02:35:12.609616951 +0000 UTC m=+764.954223006" lastFinishedPulling="2026-02-16 02:35:13.441712531 +0000 UTC m=+765.786318586" observedRunningTime="2026-02-16 02:35:14.681643685 +0000 UTC m=+767.026249730" watchObservedRunningTime="2026-02-16 02:35:14.682915429 +0000 UTC m=+767.027521484" Feb 16 02:35:18.207910 master-0 kubenswrapper[31559]: I0216 02:35:18.207807 31559 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-console/console-596bfc78ff-27s7m" podUID="97ecc023-a5a8-48d9-a333-732b37168e6e" containerName="console" containerID="cri-o://c4150c13da4f2c88b68d35fab7abb2bd849195b09f70a02c5361e2905dacf22f" gracePeriod=15 Feb 16 02:35:18.716199 master-0 kubenswrapper[31559]: I0216 02:35:18.716076 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-596bfc78ff-27s7m_97ecc023-a5a8-48d9-a333-732b37168e6e/console/0.log" Feb 16 02:35:18.716199 master-0 kubenswrapper[31559]: I0216 02:35:18.716155 31559 generic.go:334] "Generic (PLEG): container finished" podID="97ecc023-a5a8-48d9-a333-732b37168e6e" containerID="c4150c13da4f2c88b68d35fab7abb2bd849195b09f70a02c5361e2905dacf22f" exitCode=2 Feb 16 02:35:18.716624 master-0 kubenswrapper[31559]: I0216 02:35:18.716219 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-596bfc78ff-27s7m" event={"ID":"97ecc023-a5a8-48d9-a333-732b37168e6e","Type":"ContainerDied","Data":"c4150c13da4f2c88b68d35fab7abb2bd849195b09f70a02c5361e2905dacf22f"} Feb 16 02:35:18.814917 master-0 kubenswrapper[31559]: I0216 02:35:18.814846 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-596bfc78ff-27s7m_97ecc023-a5a8-48d9-a333-732b37168e6e/console/0.log" Feb 16 02:35:18.815511 master-0 kubenswrapper[31559]: I0216 02:35:18.814945 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:35:18.893676 master-0 kubenswrapper[31559]: I0216 02:35:18.893569 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-service-ca\") pod \"97ecc023-a5a8-48d9-a333-732b37168e6e\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " Feb 16 02:35:18.893948 master-0 kubenswrapper[31559]: I0216 02:35:18.893744 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-oauth-serving-cert\") pod \"97ecc023-a5a8-48d9-a333-732b37168e6e\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " Feb 16 02:35:18.893948 master-0 kubenswrapper[31559]: I0216 02:35:18.893853 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwr2j\" (UniqueName: \"kubernetes.io/projected/97ecc023-a5a8-48d9-a333-732b37168e6e-kube-api-access-fwr2j\") pod \"97ecc023-a5a8-48d9-a333-732b37168e6e\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " Feb 16 02:35:18.893948 master-0 kubenswrapper[31559]: I0216 02:35:18.893904 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-serving-cert\") pod \"97ecc023-a5a8-48d9-a333-732b37168e6e\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " Feb 16 02:35:18.894119 master-0 kubenswrapper[31559]: I0216 02:35:18.893962 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-console-config\") pod \"97ecc023-a5a8-48d9-a333-732b37168e6e\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " Feb 16 02:35:18.894119 master-0 kubenswrapper[31559]: I0216 
02:35:18.894075 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-trusted-ca-bundle\") pod \"97ecc023-a5a8-48d9-a333-732b37168e6e\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " Feb 16 02:35:18.894319 master-0 kubenswrapper[31559]: I0216 02:35:18.894256 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-service-ca" (OuterVolumeSpecName: "service-ca") pod "97ecc023-a5a8-48d9-a333-732b37168e6e" (UID: "97ecc023-a5a8-48d9-a333-732b37168e6e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:35:18.894385 master-0 kubenswrapper[31559]: I0216 02:35:18.894293 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-oauth-config\") pod \"97ecc023-a5a8-48d9-a333-732b37168e6e\" (UID: \"97ecc023-a5a8-48d9-a333-732b37168e6e\") " Feb 16 02:35:18.894385 master-0 kubenswrapper[31559]: I0216 02:35:18.894336 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "97ecc023-a5a8-48d9-a333-732b37168e6e" (UID: "97ecc023-a5a8-48d9-a333-732b37168e6e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:35:18.895103 master-0 kubenswrapper[31559]: I0216 02:35:18.895063 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "97ecc023-a5a8-48d9-a333-732b37168e6e" (UID: "97ecc023-a5a8-48d9-a333-732b37168e6e"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:35:18.895513 master-0 kubenswrapper[31559]: I0216 02:35:18.895382 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-console-config" (OuterVolumeSpecName: "console-config") pod "97ecc023-a5a8-48d9-a333-732b37168e6e" (UID: "97ecc023-a5a8-48d9-a333-732b37168e6e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:35:18.896027 master-0 kubenswrapper[31559]: I0216 02:35:18.895979 31559 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:18.896105 master-0 kubenswrapper[31559]: I0216 02:35:18.896024 31559 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:18.896105 master-0 kubenswrapper[31559]: I0216 02:35:18.896048 31559 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:18.896105 master-0 kubenswrapper[31559]: I0216 02:35:18.896070 31559 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97ecc023-a5a8-48d9-a333-732b37168e6e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:18.899104 master-0 kubenswrapper[31559]: I0216 02:35:18.899026 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod 
"97ecc023-a5a8-48d9-a333-732b37168e6e" (UID: "97ecc023-a5a8-48d9-a333-732b37168e6e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:35:18.899402 master-0 kubenswrapper[31559]: I0216 02:35:18.899247 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "97ecc023-a5a8-48d9-a333-732b37168e6e" (UID: "97ecc023-a5a8-48d9-a333-732b37168e6e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:35:18.900490 master-0 kubenswrapper[31559]: I0216 02:35:18.900402 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ecc023-a5a8-48d9-a333-732b37168e6e-kube-api-access-fwr2j" (OuterVolumeSpecName: "kube-api-access-fwr2j") pod "97ecc023-a5a8-48d9-a333-732b37168e6e" (UID: "97ecc023-a5a8-48d9-a333-732b37168e6e"). InnerVolumeSpecName "kube-api-access-fwr2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:35:18.998139 master-0 kubenswrapper[31559]: I0216 02:35:18.998052 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwr2j\" (UniqueName: \"kubernetes.io/projected/97ecc023-a5a8-48d9-a333-732b37168e6e-kube-api-access-fwr2j\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:18.998139 master-0 kubenswrapper[31559]: I0216 02:35:18.998110 31559 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:18.998139 master-0 kubenswrapper[31559]: I0216 02:35:18.998130 31559 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97ecc023-a5a8-48d9-a333-732b37168e6e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:19.735217 master-0 kubenswrapper[31559]: I0216 02:35:19.735109 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-596bfc78ff-27s7m_97ecc023-a5a8-48d9-a333-732b37168e6e/console/0.log" Feb 16 02:35:19.735217 master-0 kubenswrapper[31559]: I0216 02:35:19.735211 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-596bfc78ff-27s7m" event={"ID":"97ecc023-a5a8-48d9-a333-732b37168e6e","Type":"ContainerDied","Data":"541f406008455daa206a2595b98ec52252459822ea2f311546034ffbfed6e440"} Feb 16 02:35:19.736302 master-0 kubenswrapper[31559]: I0216 02:35:19.735264 31559 scope.go:117] "RemoveContainer" containerID="c4150c13da4f2c88b68d35fab7abb2bd849195b09f70a02c5361e2905dacf22f" Feb 16 02:35:19.736302 master-0 kubenswrapper[31559]: I0216 02:35:19.735372 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-596bfc78ff-27s7m" Feb 16 02:35:19.822123 master-0 kubenswrapper[31559]: I0216 02:35:19.821944 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-596bfc78ff-27s7m"] Feb 16 02:35:19.834403 master-0 kubenswrapper[31559]: I0216 02:35:19.834304 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-596bfc78ff-27s7m"] Feb 16 02:35:19.936295 master-0 kubenswrapper[31559]: I0216 02:35:19.936216 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ecc023-a5a8-48d9-a333-732b37168e6e" path="/var/lib/kubelet/pods/97ecc023-a5a8-48d9-a333-732b37168e6e/volumes" Feb 16 02:35:22.092229 master-0 kubenswrapper[31559]: I0216 02:35:22.092053 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:22.092229 master-0 kubenswrapper[31559]: I0216 02:35:22.092263 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:22.154362 master-0 kubenswrapper[31559]: I0216 02:35:22.154300 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:22.845727 master-0 kubenswrapper[31559]: I0216 02:35:22.845673 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-l5t7v" Feb 16 02:35:24.027501 master-0 kubenswrapper[31559]: I0216 02:35:24.027363 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd"] Feb 16 02:35:24.028756 master-0 kubenswrapper[31559]: E0216 02:35:24.028068 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ecc023-a5a8-48d9-a333-732b37168e6e" containerName="console" Feb 16 02:35:24.028756 master-0 
kubenswrapper[31559]: I0216 02:35:24.028103 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ecc023-a5a8-48d9-a333-732b37168e6e" containerName="console" Feb 16 02:35:24.028756 master-0 kubenswrapper[31559]: I0216 02:35:24.028597 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ecc023-a5a8-48d9-a333-732b37168e6e" containerName="console" Feb 16 02:35:24.031632 master-0 kubenswrapper[31559]: I0216 02:35:24.031565 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.054032 master-0 kubenswrapper[31559]: I0216 02:35:24.053960 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd"] Feb 16 02:35:24.221873 master-0 kubenswrapper[31559]: I0216 02:35:24.221774 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvkgv\" (UniqueName: \"kubernetes.io/projected/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-kube-api-access-wvkgv\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.222337 master-0 kubenswrapper[31559]: I0216 02:35:24.222209 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.222502 master-0 kubenswrapper[31559]: I0216 02:35:24.222463 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.325301 master-0 kubenswrapper[31559]: I0216 02:35:24.325100 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.325755 master-0 kubenswrapper[31559]: I0216 02:35:24.325707 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.325951 master-0 kubenswrapper[31559]: I0216 02:35:24.325902 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.326223 master-0 kubenswrapper[31559]: I0216 02:35:24.326036 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-bundle\") pod 
\"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.326484 master-0 kubenswrapper[31559]: I0216 02:35:24.326407 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvkgv\" (UniqueName: \"kubernetes.io/projected/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-kube-api-access-wvkgv\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.356957 master-0 kubenswrapper[31559]: I0216 02:35:24.356899 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvkgv\" (UniqueName: \"kubernetes.io/projected/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-kube-api-access-wvkgv\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:24.657046 master-0 kubenswrapper[31559]: I0216 02:35:24.656847 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:25.196137 master-0 kubenswrapper[31559]: W0216 02:35:25.196039 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0e7bd42_250e_460f_a598_cb9bdc6b40f0.slice/crio-e9bbd7bfb00f89c40ec0dbe12b4f001baf038a2343cba1de7489b50bc507ceb9 WatchSource:0}: Error finding container e9bbd7bfb00f89c40ec0dbe12b4f001baf038a2343cba1de7489b50bc507ceb9: Status 404 returned error can't find the container with id e9bbd7bfb00f89c40ec0dbe12b4f001baf038a2343cba1de7489b50bc507ceb9 Feb 16 02:35:25.197093 master-0 kubenswrapper[31559]: I0216 02:35:25.196736 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd"] Feb 16 02:35:25.816027 master-0 kubenswrapper[31559]: I0216 02:35:25.815817 31559 generic.go:334] "Generic (PLEG): container finished" podID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerID="d0c90eb84fc6f0343baeff46a79fa9bddda71cd9a730a1fc6f22cd3e471b3b33" exitCode=0 Feb 16 02:35:25.816027 master-0 kubenswrapper[31559]: I0216 02:35:25.815923 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" event={"ID":"e0e7bd42-250e-460f-a598-cb9bdc6b40f0","Type":"ContainerDied","Data":"d0c90eb84fc6f0343baeff46a79fa9bddda71cd9a730a1fc6f22cd3e471b3b33"} Feb 16 02:35:25.816027 master-0 kubenswrapper[31559]: I0216 02:35:25.815980 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" event={"ID":"e0e7bd42-250e-460f-a598-cb9bdc6b40f0","Type":"ContainerStarted","Data":"e9bbd7bfb00f89c40ec0dbe12b4f001baf038a2343cba1de7489b50bc507ceb9"} Feb 16 02:35:26.834415 master-0 kubenswrapper[31559]: I0216 02:35:26.831703 31559 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" event={"ID":"e0e7bd42-250e-460f-a598-cb9bdc6b40f0","Type":"ContainerStarted","Data":"853729aa6e30701cc2d0d57ae681d1f93a144674d53827249c0853d5ccb35c2b"} Feb 16 02:35:27.860620 master-0 kubenswrapper[31559]: I0216 02:35:27.858310 31559 generic.go:334] "Generic (PLEG): container finished" podID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerID="853729aa6e30701cc2d0d57ae681d1f93a144674d53827249c0853d5ccb35c2b" exitCode=0 Feb 16 02:35:27.860620 master-0 kubenswrapper[31559]: I0216 02:35:27.858391 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" event={"ID":"e0e7bd42-250e-460f-a598-cb9bdc6b40f0","Type":"ContainerDied","Data":"853729aa6e30701cc2d0d57ae681d1f93a144674d53827249c0853d5ccb35c2b"} Feb 16 02:35:28.661236 master-0 kubenswrapper[31559]: I0216 02:35:28.661056 31559 scope.go:117] "RemoveContainer" containerID="8e65952f68dc70a998ca96fc43cd86d783845ba696b7cee768810bbdde0b1b72" Feb 16 02:35:28.873779 master-0 kubenswrapper[31559]: I0216 02:35:28.873690 31559 generic.go:334] "Generic (PLEG): container finished" podID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerID="c70499807107b33c5a45edd5df439da424c8a1ee4251b86e0e2b1d961b28589e" exitCode=0 Feb 16 02:35:28.874751 master-0 kubenswrapper[31559]: I0216 02:35:28.873768 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" event={"ID":"e0e7bd42-250e-460f-a598-cb9bdc6b40f0","Type":"ContainerDied","Data":"c70499807107b33c5a45edd5df439da424c8a1ee4251b86e0e2b1d961b28589e"} Feb 16 02:35:30.317046 master-0 kubenswrapper[31559]: I0216 02:35:30.316968 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:30.464487 master-0 kubenswrapper[31559]: I0216 02:35:30.464397 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-util\") pod \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " Feb 16 02:35:30.464879 master-0 kubenswrapper[31559]: I0216 02:35:30.464850 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvkgv\" (UniqueName: \"kubernetes.io/projected/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-kube-api-access-wvkgv\") pod \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " Feb 16 02:35:30.465253 master-0 kubenswrapper[31559]: I0216 02:35:30.465225 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-bundle\") pod \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\" (UID: \"e0e7bd42-250e-460f-a598-cb9bdc6b40f0\") " Feb 16 02:35:30.466570 master-0 kubenswrapper[31559]: I0216 02:35:30.466497 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-bundle" (OuterVolumeSpecName: "bundle") pod "e0e7bd42-250e-460f-a598-cb9bdc6b40f0" (UID: "e0e7bd42-250e-460f-a598-cb9bdc6b40f0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:35:30.469553 master-0 kubenswrapper[31559]: I0216 02:35:30.469473 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-kube-api-access-wvkgv" (OuterVolumeSpecName: "kube-api-access-wvkgv") pod "e0e7bd42-250e-460f-a598-cb9bdc6b40f0" (UID: "e0e7bd42-250e-460f-a598-cb9bdc6b40f0"). InnerVolumeSpecName "kube-api-access-wvkgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:35:30.500226 master-0 kubenswrapper[31559]: I0216 02:35:30.500046 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-util" (OuterVolumeSpecName: "util") pod "e0e7bd42-250e-460f-a598-cb9bdc6b40f0" (UID: "e0e7bd42-250e-460f-a598-cb9bdc6b40f0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:35:30.568203 master-0 kubenswrapper[31559]: I0216 02:35:30.568094 31559 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-util\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:30.568203 master-0 kubenswrapper[31559]: I0216 02:35:30.568173 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvkgv\" (UniqueName: \"kubernetes.io/projected/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-kube-api-access-wvkgv\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:30.568203 master-0 kubenswrapper[31559]: I0216 02:35:30.568204 31559 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0e7bd42-250e-460f-a598-cb9bdc6b40f0-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:35:30.908683 master-0 kubenswrapper[31559]: I0216 02:35:30.908093 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" event={"ID":"e0e7bd42-250e-460f-a598-cb9bdc6b40f0","Type":"ContainerDied","Data":"e9bbd7bfb00f89c40ec0dbe12b4f001baf038a2343cba1de7489b50bc507ceb9"} Feb 16 02:35:30.908683 master-0 kubenswrapper[31559]: I0216 02:35:30.908166 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9bbd7bfb00f89c40ec0dbe12b4f001baf038a2343cba1de7489b50bc507ceb9" Feb 16 02:35:30.908683 master-0 kubenswrapper[31559]: I0216 02:35:30.908235 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c219r8qd" Feb 16 02:35:34.745002 master-0 kubenswrapper[31559]: I0216 02:35:34.744330 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz"] Feb 16 02:35:34.745002 master-0 kubenswrapper[31559]: E0216 02:35:34.744696 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerName="util" Feb 16 02:35:34.745002 master-0 kubenswrapper[31559]: I0216 02:35:34.744709 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerName="util" Feb 16 02:35:34.745002 master-0 kubenswrapper[31559]: E0216 02:35:34.744738 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerName="extract" Feb 16 02:35:34.745002 master-0 kubenswrapper[31559]: I0216 02:35:34.744745 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerName="extract" Feb 16 02:35:34.745002 master-0 kubenswrapper[31559]: E0216 02:35:34.744769 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerName="pull" Feb 16 02:35:34.745002 master-0 
kubenswrapper[31559]: I0216 02:35:34.744775 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerName="pull" Feb 16 02:35:34.745002 master-0 kubenswrapper[31559]: I0216 02:35:34.744936 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e7bd42-250e-460f-a598-cb9bdc6b40f0" containerName="extract" Feb 16 02:35:34.745838 master-0 kubenswrapper[31559]: I0216 02:35:34.745396 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" Feb 16 02:35:34.856480 master-0 kubenswrapper[31559]: I0216 02:35:34.855545 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vstrj\" (UniqueName: \"kubernetes.io/projected/989fd3d8-30af-4220-a31c-967b16291cd0-kube-api-access-vstrj\") pod \"openstack-operator-controller-init-7f8db498b4-m2vnz\" (UID: \"989fd3d8-30af-4220-a31c-967b16291cd0\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" Feb 16 02:35:34.865455 master-0 kubenswrapper[31559]: I0216 02:35:34.861923 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz"] Feb 16 02:35:34.959641 master-0 kubenswrapper[31559]: I0216 02:35:34.959592 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vstrj\" (UniqueName: \"kubernetes.io/projected/989fd3d8-30af-4220-a31c-967b16291cd0-kube-api-access-vstrj\") pod \"openstack-operator-controller-init-7f8db498b4-m2vnz\" (UID: \"989fd3d8-30af-4220-a31c-967b16291cd0\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" Feb 16 02:35:34.991402 master-0 kubenswrapper[31559]: I0216 02:35:34.990221 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vstrj\" (UniqueName: 
\"kubernetes.io/projected/989fd3d8-30af-4220-a31c-967b16291cd0-kube-api-access-vstrj\") pod \"openstack-operator-controller-init-7f8db498b4-m2vnz\" (UID: \"989fd3d8-30af-4220-a31c-967b16291cd0\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" Feb 16 02:35:35.054206 master-0 kubenswrapper[31559]: I0216 02:35:35.054156 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" Feb 16 02:35:35.515016 master-0 kubenswrapper[31559]: I0216 02:35:35.511928 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz"] Feb 16 02:35:35.515016 master-0 kubenswrapper[31559]: W0216 02:35:35.514571 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod989fd3d8_30af_4220_a31c_967b16291cd0.slice/crio-1c0906d5f4f44e1bb53bebf04bc87fa09ff0c8f41f8a1862782f0c76026db05a WatchSource:0}: Error finding container 1c0906d5f4f44e1bb53bebf04bc87fa09ff0c8f41f8a1862782f0c76026db05a: Status 404 returned error can't find the container with id 1c0906d5f4f44e1bb53bebf04bc87fa09ff0c8f41f8a1862782f0c76026db05a Feb 16 02:35:35.990627 master-0 kubenswrapper[31559]: I0216 02:35:35.990537 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" event={"ID":"989fd3d8-30af-4220-a31c-967b16291cd0","Type":"ContainerStarted","Data":"1c0906d5f4f44e1bb53bebf04bc87fa09ff0c8f41f8a1862782f0c76026db05a"} Feb 16 02:35:41.074260 master-0 kubenswrapper[31559]: I0216 02:35:41.074053 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" event={"ID":"989fd3d8-30af-4220-a31c-967b16291cd0","Type":"ContainerStarted","Data":"f5ffe3b4f60aa9da7560b9bf0076bed1fe1f2cde2b909b5ca48e3f7aa39f6ca8"} Feb 
16 02:35:41.075638 master-0 kubenswrapper[31559]: I0216 02:35:41.074456 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" Feb 16 02:35:41.130144 master-0 kubenswrapper[31559]: I0216 02:35:41.130016 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" podStartSLOduration=2.740130604 podStartE2EDuration="7.129987459s" podCreationTimestamp="2026-02-16 02:35:34 +0000 UTC" firstStartedPulling="2026-02-16 02:35:35.518466104 +0000 UTC m=+787.863072129" lastFinishedPulling="2026-02-16 02:35:39.908322929 +0000 UTC m=+792.252928984" observedRunningTime="2026-02-16 02:35:41.122055702 +0000 UTC m=+793.466661747" watchObservedRunningTime="2026-02-16 02:35:41.129987459 +0000 UTC m=+793.474593514" Feb 16 02:35:45.059606 master-0 kubenswrapper[31559]: I0216 02:35:45.058594 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-m2vnz" Feb 16 02:36:06.029285 master-0 kubenswrapper[31559]: I0216 02:36:06.029225 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb"] Feb 16 02:36:06.030263 master-0 kubenswrapper[31559]: I0216 02:36:06.030233 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb" Feb 16 02:36:06.047013 master-0 kubenswrapper[31559]: I0216 02:36:06.046139 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s"] Feb 16 02:36:06.047634 master-0 kubenswrapper[31559]: I0216 02:36:06.047609 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s" Feb 16 02:36:06.063419 master-0 kubenswrapper[31559]: I0216 02:36:06.062257 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb"] Feb 16 02:36:06.075122 master-0 kubenswrapper[31559]: I0216 02:36:06.075064 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s"] Feb 16 02:36:06.089647 master-0 kubenswrapper[31559]: I0216 02:36:06.089610 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"] Feb 16 02:36:06.090620 master-0 kubenswrapper[31559]: I0216 02:36:06.090593 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj" Feb 16 02:36:06.134934 master-0 kubenswrapper[31559]: I0216 02:36:06.134870 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"] Feb 16 02:36:06.136978 master-0 kubenswrapper[31559]: I0216 02:36:06.136934 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sld98\" (UniqueName: \"kubernetes.io/projected/2037bd66-cf51-4a19-aaf6-6a95950a0640-kube-api-access-sld98\") pod \"barbican-operator-controller-manager-868647ff47-8rrtb\" (UID: \"2037bd66-cf51-4a19-aaf6-6a95950a0640\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb" Feb 16 02:36:06.137071 master-0 kubenswrapper[31559]: I0216 02:36:06.137025 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b698w\" (UniqueName: \"kubernetes.io/projected/3c209763-741a-4879-af3f-6d378990293e-kube-api-access-b698w\") pod 
\"cinder-operator-controller-manager-5d946d989d-hpq8s\" (UID: \"3c209763-741a-4879-af3f-6d378990293e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s" Feb 16 02:36:06.150763 master-0 kubenswrapper[31559]: I0216 02:36:06.150625 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"] Feb 16 02:36:06.155083 master-0 kubenswrapper[31559]: I0216 02:36:06.154906 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl" Feb 16 02:36:06.162624 master-0 kubenswrapper[31559]: I0216 02:36:06.162568 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"] Feb 16 02:36:06.163647 master-0 kubenswrapper[31559]: I0216 02:36:06.163629 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27" Feb 16 02:36:06.224973 master-0 kubenswrapper[31559]: I0216 02:36:06.222245 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"] Feb 16 02:36:06.238898 master-0 kubenswrapper[31559]: I0216 02:36:06.238798 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sld98\" (UniqueName: \"kubernetes.io/projected/2037bd66-cf51-4a19-aaf6-6a95950a0640-kube-api-access-sld98\") pod \"barbican-operator-controller-manager-868647ff47-8rrtb\" (UID: \"2037bd66-cf51-4a19-aaf6-6a95950a0640\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb" Feb 16 02:36:06.238898 master-0 kubenswrapper[31559]: I0216 02:36:06.238899 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchh4\" (UniqueName: 
\"kubernetes.io/projected/5984cc07-3c42-4d98-aed0-1ffa30bff993-kube-api-access-nchh4\") pod \"designate-operator-controller-manager-6d8bf5c495-bktfj\" (UID: \"5984cc07-3c42-4d98-aed0-1ffa30bff993\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"
Feb 16 02:36:06.239152 master-0 kubenswrapper[31559]: I0216 02:36:06.238924 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b698w\" (UniqueName: \"kubernetes.io/projected/3c209763-741a-4879-af3f-6d378990293e-kube-api-access-b698w\") pod \"cinder-operator-controller-manager-5d946d989d-hpq8s\" (UID: \"3c209763-741a-4879-af3f-6d378990293e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s"
Feb 16 02:36:06.239152 master-0 kubenswrapper[31559]: I0216 02:36:06.238994 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zgb\" (UniqueName: \"kubernetes.io/projected/ac5aa459-16a0-47f5-9cb1-7872b5342ce5-kube-api-access-k2zgb\") pod \"glance-operator-controller-manager-77987464f4-5nzdl\" (UID: \"ac5aa459-16a0-47f5-9cb1-7872b5342ce5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"
Feb 16 02:36:06.239152 master-0 kubenswrapper[31559]: I0216 02:36:06.239014 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt229\" (UniqueName: \"kubernetes.io/projected/76205954-3caf-4932-bb2c-c2315357a221-kube-api-access-bt229\") pod \"heat-operator-controller-manager-69f49c598c-sdw27\" (UID: \"76205954-3caf-4932-bb2c-c2315357a221\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"
Feb 16 02:36:06.262465 master-0 kubenswrapper[31559]: I0216 02:36:06.260948 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"]
Feb 16 02:36:06.293527 master-0 kubenswrapper[31559]: I0216 02:36:06.292412 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sld98\" (UniqueName: \"kubernetes.io/projected/2037bd66-cf51-4a19-aaf6-6a95950a0640-kube-api-access-sld98\") pod \"barbican-operator-controller-manager-868647ff47-8rrtb\" (UID: \"2037bd66-cf51-4a19-aaf6-6a95950a0640\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb"
Feb 16 02:36:06.293527 master-0 kubenswrapper[31559]: I0216 02:36:06.292963 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b698w\" (UniqueName: \"kubernetes.io/projected/3c209763-741a-4879-af3f-6d378990293e-kube-api-access-b698w\") pod \"cinder-operator-controller-manager-5d946d989d-hpq8s\" (UID: \"3c209763-741a-4879-af3f-6d378990293e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s"
Feb 16 02:36:06.304941 master-0 kubenswrapper[31559]: I0216 02:36:06.304875 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"]
Feb 16 02:36:06.306101 master-0 kubenswrapper[31559]: I0216 02:36:06.306064 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"
Feb 16 02:36:06.340657 master-0 kubenswrapper[31559]: I0216 02:36:06.340590 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2zgb\" (UniqueName: \"kubernetes.io/projected/ac5aa459-16a0-47f5-9cb1-7872b5342ce5-kube-api-access-k2zgb\") pod \"glance-operator-controller-manager-77987464f4-5nzdl\" (UID: \"ac5aa459-16a0-47f5-9cb1-7872b5342ce5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"
Feb 16 02:36:06.340657 master-0 kubenswrapper[31559]: I0216 02:36:06.340656 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt229\" (UniqueName: \"kubernetes.io/projected/76205954-3caf-4932-bb2c-c2315357a221-kube-api-access-bt229\") pod \"heat-operator-controller-manager-69f49c598c-sdw27\" (UID: \"76205954-3caf-4932-bb2c-c2315357a221\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"
Feb 16 02:36:06.340896 master-0 kubenswrapper[31559]: I0216 02:36:06.340789 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchh4\" (UniqueName: \"kubernetes.io/projected/5984cc07-3c42-4d98-aed0-1ffa30bff993-kube-api-access-nchh4\") pod \"designate-operator-controller-manager-6d8bf5c495-bktfj\" (UID: \"5984cc07-3c42-4d98-aed0-1ffa30bff993\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"
Feb 16 02:36:06.344683 master-0 kubenswrapper[31559]: I0216 02:36:06.344640 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"]
Feb 16 02:36:06.346375 master-0 kubenswrapper[31559]: I0216 02:36:06.346278 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:06.354721 master-0 kubenswrapper[31559]: I0216 02:36:06.354175 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 16 02:36:06.372845 master-0 kubenswrapper[31559]: I0216 02:36:06.369388 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchh4\" (UniqueName: \"kubernetes.io/projected/5984cc07-3c42-4d98-aed0-1ffa30bff993-kube-api-access-nchh4\") pod \"designate-operator-controller-manager-6d8bf5c495-bktfj\" (UID: \"5984cc07-3c42-4d98-aed0-1ffa30bff993\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"
Feb 16 02:36:06.376528 master-0 kubenswrapper[31559]: I0216 02:36:06.376485 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"]
Feb 16 02:36:06.377156 master-0 kubenswrapper[31559]: I0216 02:36:06.377115 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2zgb\" (UniqueName: \"kubernetes.io/projected/ac5aa459-16a0-47f5-9cb1-7872b5342ce5-kube-api-access-k2zgb\") pod \"glance-operator-controller-manager-77987464f4-5nzdl\" (UID: \"ac5aa459-16a0-47f5-9cb1-7872b5342ce5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"
Feb 16 02:36:06.381855 master-0 kubenswrapper[31559]: I0216 02:36:06.381781 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt229\" (UniqueName: \"kubernetes.io/projected/76205954-3caf-4932-bb2c-c2315357a221-kube-api-access-bt229\") pod \"heat-operator-controller-manager-69f49c598c-sdw27\" (UID: \"76205954-3caf-4932-bb2c-c2315357a221\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"
Feb 16 02:36:06.410253 master-0 kubenswrapper[31559]: I0216 02:36:06.401562 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"]
Feb 16 02:36:06.410253 master-0 kubenswrapper[31559]: I0216 02:36:06.403162 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb"
Feb 16 02:36:06.419368 master-0 kubenswrapper[31559]: I0216 02:36:06.413765 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"]
Feb 16 02:36:06.419368 master-0 kubenswrapper[31559]: I0216 02:36:06.415052 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"
Feb 16 02:36:06.428644 master-0 kubenswrapper[31559]: I0216 02:36:06.428590 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"]
Feb 16 02:36:06.434316 master-0 kubenswrapper[31559]: I0216 02:36:06.434274 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"
Feb 16 02:36:06.435421 master-0 kubenswrapper[31559]: I0216 02:36:06.435388 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s"
Feb 16 02:36:06.438421 master-0 kubenswrapper[31559]: I0216 02:36:06.438338 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"]
Feb 16 02:36:06.442157 master-0 kubenswrapper[31559]: I0216 02:36:06.442115 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42c97\" (UniqueName: \"kubernetes.io/projected/c4437af6-8e55-42d9-8c6c-61965282cea0-kube-api-access-42c97\") pod \"horizon-operator-controller-manager-5b9b8895d5-kvc45\" (UID: \"c4437af6-8e55-42d9-8c6c-61965282cea0\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"
Feb 16 02:36:06.442259 master-0 kubenswrapper[31559]: I0216 02:36:06.442161 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:06.442328 master-0 kubenswrapper[31559]: I0216 02:36:06.442315 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqsln\" (UniqueName: \"kubernetes.io/projected/5a00ec8c-7fd9-4450-8879-af88897ebfc6-kube-api-access-rqsln\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:06.467057 master-0 kubenswrapper[31559]: I0216 02:36:06.455076 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"
Feb 16 02:36:06.467057 master-0 kubenswrapper[31559]: I0216 02:36:06.455551 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"]
Feb 16 02:36:06.473618 master-0 kubenswrapper[31559]: I0216 02:36:06.473549 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"]
Feb 16 02:36:06.474928 master-0 kubenswrapper[31559]: I0216 02:36:06.474893 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"
Feb 16 02:36:06.495826 master-0 kubenswrapper[31559]: I0216 02:36:06.495554 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"
Feb 16 02:36:06.514093 master-0 kubenswrapper[31559]: I0216 02:36:06.502121 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"]
Feb 16 02:36:06.514093 master-0 kubenswrapper[31559]: I0216 02:36:06.508502 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"
Feb 16 02:36:06.522017 master-0 kubenswrapper[31559]: I0216 02:36:06.521979 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"
Feb 16 02:36:06.530971 master-0 kubenswrapper[31559]: I0216 02:36:06.530914 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"]
Feb 16 02:36:06.539133 master-0 kubenswrapper[31559]: I0216 02:36:06.539075 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"]
Feb 16 02:36:06.547381 master-0 kubenswrapper[31559]: I0216 02:36:06.543640 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqsln\" (UniqueName: \"kubernetes.io/projected/5a00ec8c-7fd9-4450-8879-af88897ebfc6-kube-api-access-rqsln\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:06.547381 master-0 kubenswrapper[31559]: I0216 02:36:06.543838 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmbh\" (UniqueName: \"kubernetes.io/projected/09111ae4-1998-425c-b30a-afbbc4a75dba-kube-api-access-cmmbh\") pod \"ironic-operator-controller-manager-554564d7fc-2nhlv\" (UID: \"09111ae4-1998-425c-b30a-afbbc4a75dba\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"
Feb 16 02:36:06.547381 master-0 kubenswrapper[31559]: I0216 02:36:06.543942 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b76cw\" (UniqueName: \"kubernetes.io/projected/85c662d6-ab4c-4999-93cf-da87dfff2271-kube-api-access-b76cw\") pod \"keystone-operator-controller-manager-b4d948c87-lfqbs\" (UID: \"85c662d6-ab4c-4999-93cf-da87dfff2271\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"
Feb 16 02:36:06.547381 master-0 kubenswrapper[31559]: I0216 02:36:06.543979 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpnw4\" (UniqueName: \"kubernetes.io/projected/15fefa75-bf9a-46e2-b22e-8fcdd3188788-kube-api-access-zpnw4\") pod \"manila-operator-controller-manager-54f6768c69-sth8f\" (UID: \"15fefa75-bf9a-46e2-b22e-8fcdd3188788\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"
Feb 16 02:36:06.547381 master-0 kubenswrapper[31559]: I0216 02:36:06.544030 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42c97\" (UniqueName: \"kubernetes.io/projected/c4437af6-8e55-42d9-8c6c-61965282cea0-kube-api-access-42c97\") pod \"horizon-operator-controller-manager-5b9b8895d5-kvc45\" (UID: \"c4437af6-8e55-42d9-8c6c-61965282cea0\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"
Feb 16 02:36:06.547381 master-0 kubenswrapper[31559]: E0216 02:36:06.544316 31559 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 02:36:06.547381 master-0 kubenswrapper[31559]: E0216 02:36:06.544401 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert podName:5a00ec8c-7fd9-4450-8879-af88897ebfc6 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:07.044380757 +0000 UTC m=+819.388986772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert") pod "infra-operator-controller-manager-5f879c76b6-ftls6" (UID: "5a00ec8c-7fd9-4450-8879-af88897ebfc6") : secret "infra-operator-webhook-server-cert" not found
Feb 16 02:36:06.547381 master-0 kubenswrapper[31559]: I0216 02:36:06.546184 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:06.559826 master-0 kubenswrapper[31559]: I0216 02:36:06.559776 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqsln\" (UniqueName: \"kubernetes.io/projected/5a00ec8c-7fd9-4450-8879-af88897ebfc6-kube-api-access-rqsln\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:06.562768 master-0 kubenswrapper[31559]: I0216 02:36:06.562730 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42c97\" (UniqueName: \"kubernetes.io/projected/c4437af6-8e55-42d9-8c6c-61965282cea0-kube-api-access-42c97\") pod \"horizon-operator-controller-manager-5b9b8895d5-kvc45\" (UID: \"c4437af6-8e55-42d9-8c6c-61965282cea0\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"
Feb 16 02:36:06.578625 master-0 kubenswrapper[31559]: I0216 02:36:06.569740 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"]
Feb 16 02:36:06.578625 master-0 kubenswrapper[31559]: I0216 02:36:06.571509 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"
Feb 16 02:36:06.580324 master-0 kubenswrapper[31559]: I0216 02:36:06.580118 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"]
Feb 16 02:36:06.594201 master-0 kubenswrapper[31559]: I0216 02:36:06.594150 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"]
Feb 16 02:36:06.595281 master-0 kubenswrapper[31559]: I0216 02:36:06.595252 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"
Feb 16 02:36:06.606366 master-0 kubenswrapper[31559]: I0216 02:36:06.606242 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"]
Feb 16 02:36:06.624823 master-0 kubenswrapper[31559]: I0216 02:36:06.624628 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl"]
Feb 16 02:36:06.636287 master-0 kubenswrapper[31559]: I0216 02:36:06.632927 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl"]
Feb 16 02:36:06.636287 master-0 kubenswrapper[31559]: I0216 02:36:06.633037 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl"
Feb 16 02:36:06.646349 master-0 kubenswrapper[31559]: I0216 02:36:06.646308 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"]
Feb 16 02:36:06.648205 master-0 kubenswrapper[31559]: I0216 02:36:06.648163 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b76cw\" (UniqueName: \"kubernetes.io/projected/85c662d6-ab4c-4999-93cf-da87dfff2271-kube-api-access-b76cw\") pod \"keystone-operator-controller-manager-b4d948c87-lfqbs\" (UID: \"85c662d6-ab4c-4999-93cf-da87dfff2271\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"
Feb 16 02:36:06.648276 master-0 kubenswrapper[31559]: I0216 02:36:06.648205 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpnw4\" (UniqueName: \"kubernetes.io/projected/15fefa75-bf9a-46e2-b22e-8fcdd3188788-kube-api-access-zpnw4\") pod \"manila-operator-controller-manager-54f6768c69-sth8f\" (UID: \"15fefa75-bf9a-46e2-b22e-8fcdd3188788\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"
Feb 16 02:36:06.648276 master-0 kubenswrapper[31559]: I0216 02:36:06.648264 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nsqt\" (UniqueName: \"kubernetes.io/projected/6c48078c-3fd8-4c0c-a7ea-52e0e015324d-kube-api-access-5nsqt\") pod \"mariadb-operator-controller-manager-6994f66f48-5mbdg\" (UID: \"6c48078c-3fd8-4c0c-a7ea-52e0e015324d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"
Feb 16 02:36:06.648341 master-0 kubenswrapper[31559]: I0216 02:36:06.648312 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqcsz\" (UniqueName: \"kubernetes.io/projected/030323ce-de29-4144-bba5-f811e997f7d8-kube-api-access-zqcsz\") pod \"neutron-operator-controller-manager-64ddbf8bb-6wwcr\" (UID: \"030323ce-de29-4144-bba5-f811e997f7d8\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"
Feb 16 02:36:06.648374 master-0 kubenswrapper[31559]: I0216 02:36:06.648340 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmbh\" (UniqueName: \"kubernetes.io/projected/09111ae4-1998-425c-b30a-afbbc4a75dba-kube-api-access-cmmbh\") pod \"ironic-operator-controller-manager-554564d7fc-2nhlv\" (UID: \"09111ae4-1998-425c-b30a-afbbc4a75dba\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"
Feb 16 02:36:06.648801 master-0 kubenswrapper[31559]: I0216 02:36:06.648759 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"
Feb 16 02:36:06.684507 master-0 kubenswrapper[31559]: I0216 02:36:06.684292 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpnw4\" (UniqueName: \"kubernetes.io/projected/15fefa75-bf9a-46e2-b22e-8fcdd3188788-kube-api-access-zpnw4\") pod \"manila-operator-controller-manager-54f6768c69-sth8f\" (UID: \"15fefa75-bf9a-46e2-b22e-8fcdd3188788\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"
Feb 16 02:36:06.684819 master-0 kubenswrapper[31559]: I0216 02:36:06.684646 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"
Feb 16 02:36:06.688423 master-0 kubenswrapper[31559]: I0216 02:36:06.688145 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"]
Feb 16 02:36:06.689787 master-0 kubenswrapper[31559]: I0216 02:36:06.689221 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"
Feb 16 02:36:06.695324 master-0 kubenswrapper[31559]: I0216 02:36:06.688373 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmbh\" (UniqueName: \"kubernetes.io/projected/09111ae4-1998-425c-b30a-afbbc4a75dba-kube-api-access-cmmbh\") pod \"ironic-operator-controller-manager-554564d7fc-2nhlv\" (UID: \"09111ae4-1998-425c-b30a-afbbc4a75dba\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"
Feb 16 02:36:06.699287 master-0 kubenswrapper[31559]: I0216 02:36:06.699236 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 16 02:36:06.699775 master-0 kubenswrapper[31559]: I0216 02:36:06.699649 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b76cw\" (UniqueName: \"kubernetes.io/projected/85c662d6-ab4c-4999-93cf-da87dfff2271-kube-api-access-b76cw\") pod \"keystone-operator-controller-manager-b4d948c87-lfqbs\" (UID: \"85c662d6-ab4c-4999-93cf-da87dfff2271\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"
Feb 16 02:36:06.701845 master-0 kubenswrapper[31559]: I0216 02:36:06.701817 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"]
Feb 16 02:36:06.728783 master-0 kubenswrapper[31559]: I0216 02:36:06.726756 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm"]
Feb 16 02:36:06.729596 master-0 kubenswrapper[31559]: I0216 02:36:06.729560 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm"
Feb 16 02:36:06.751135 master-0 kubenswrapper[31559]: I0216 02:36:06.750840 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"
Feb 16 02:36:06.751135 master-0 kubenswrapper[31559]: I0216 02:36:06.750882 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pgql\" (UniqueName: \"kubernetes.io/projected/f954fd81-2f66-4e9e-b6da-f5b3080b09a3-kube-api-access-8pgql\") pod \"nova-operator-controller-manager-567668f5cf-pzmcd\" (UID: \"f954fd81-2f66-4e9e-b6da-f5b3080b09a3\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"
Feb 16 02:36:06.751135 master-0 kubenswrapper[31559]: I0216 02:36:06.750912 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7blw\" (UniqueName: \"kubernetes.io/projected/54cfc1b0-0f93-4c97-8675-11d76038c0e7-kube-api-access-x7blw\") pod \"octavia-operator-controller-manager-69f8888797-rwgkl\" (UID: \"54cfc1b0-0f93-4c97-8675-11d76038c0e7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl"
Feb 16 02:36:06.751135 master-0 kubenswrapper[31559]: I0216 02:36:06.750969 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nsqt\" (UniqueName: \"kubernetes.io/projected/6c48078c-3fd8-4c0c-a7ea-52e0e015324d-kube-api-access-5nsqt\") pod \"mariadb-operator-controller-manager-6994f66f48-5mbdg\" (UID: \"6c48078c-3fd8-4c0c-a7ea-52e0e015324d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"
Feb 16 02:36:06.751135 master-0 kubenswrapper[31559]: I0216 02:36:06.750994 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz29n\" (UniqueName: \"kubernetes.io/projected/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-kube-api-access-lz29n\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"
Feb 16 02:36:06.751135 master-0 kubenswrapper[31559]: I0216 02:36:06.751028 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhwh9\" (UniqueName: \"kubernetes.io/projected/9a9d5eac-39bd-4752-bb69-d936ad60f9a9-kube-api-access-bhwh9\") pod \"ovn-operator-controller-manager-d44cf6b75-7cfks\" (UID: \"9a9d5eac-39bd-4752-bb69-d936ad60f9a9\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"
Feb 16 02:36:06.751135 master-0 kubenswrapper[31559]: I0216 02:36:06.751064 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqcsz\" (UniqueName: \"kubernetes.io/projected/030323ce-de29-4144-bba5-f811e997f7d8-kube-api-access-zqcsz\") pod \"neutron-operator-controller-manager-64ddbf8bb-6wwcr\" (UID: \"030323ce-de29-4144-bba5-f811e997f7d8\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"
Feb 16 02:36:06.754072 master-0 kubenswrapper[31559]: I0216 02:36:06.754034 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm"]
Feb 16 02:36:06.773909 master-0 kubenswrapper[31559]: I0216 02:36:06.768324 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"]
Feb 16 02:36:06.796966 master-0 kubenswrapper[31559]: I0216 02:36:06.778779 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-9f485"]
Feb 16 02:36:06.796966 master-0 kubenswrapper[31559]: I0216 02:36:06.779923 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485"
Feb 16 02:36:06.796966 master-0 kubenswrapper[31559]: I0216 02:36:06.784803 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nsqt\" (UniqueName: \"kubernetes.io/projected/6c48078c-3fd8-4c0c-a7ea-52e0e015324d-kube-api-access-5nsqt\") pod \"mariadb-operator-controller-manager-6994f66f48-5mbdg\" (UID: \"6c48078c-3fd8-4c0c-a7ea-52e0e015324d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"
Feb 16 02:36:06.796966 master-0 kubenswrapper[31559]: I0216 02:36:06.789496 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd"]
Feb 16 02:36:06.796966 master-0 kubenswrapper[31559]: I0216 02:36:06.790498 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd"
Feb 16 02:36:06.797643 master-0 kubenswrapper[31559]: I0216 02:36:06.797574 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqcsz\" (UniqueName: \"kubernetes.io/projected/030323ce-de29-4144-bba5-f811e997f7d8-kube-api-access-zqcsz\") pod \"neutron-operator-controller-manager-64ddbf8bb-6wwcr\" (UID: \"030323ce-de29-4144-bba5-f811e997f7d8\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"
Feb 16 02:36:06.798661 master-0 kubenswrapper[31559]: I0216 02:36:06.798616 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-9f485"]
Feb 16 02:36:06.800808 master-0 kubenswrapper[31559]: I0216 02:36:06.800386 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"
Feb 16 02:36:06.810415 master-0 kubenswrapper[31559]: I0216 02:36:06.809779 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"
Feb 16 02:36:06.810415 master-0 kubenswrapper[31559]: I0216 02:36:06.809877 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd"]
Feb 16 02:36:06.825371 master-0 kubenswrapper[31559]: I0216 02:36:06.825256 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"
Feb 16 02:36:06.836764 master-0 kubenswrapper[31559]: I0216 02:36:06.835731 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"
Feb 16 02:36:06.852903 master-0 kubenswrapper[31559]: I0216 02:36:06.852725 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bspvx\" (UniqueName: \"kubernetes.io/projected/a78ae1a9-3cbf-4147-9761-50a806bafceb-kube-api-access-bspvx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-9qhgd\" (UID: \"a78ae1a9-3cbf-4147-9761-50a806bafceb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd"
Feb 16 02:36:06.852903 master-0 kubenswrapper[31559]: I0216 02:36:06.852803 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"
Feb 16 02:36:06.852903 master-0 kubenswrapper[31559]: I0216 02:36:06.852823 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pgql\" (UniqueName: \"kubernetes.io/projected/f954fd81-2f66-4e9e-b6da-f5b3080b09a3-kube-api-access-8pgql\") pod \"nova-operator-controller-manager-567668f5cf-pzmcd\" (UID: \"f954fd81-2f66-4e9e-b6da-f5b3080b09a3\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"
Feb 16 02:36:06.852903 master-0 kubenswrapper[31559]: I0216 02:36:06.852845 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7blw\" (UniqueName: \"kubernetes.io/projected/54cfc1b0-0f93-4c97-8675-11d76038c0e7-kube-api-access-x7blw\") pod \"octavia-operator-controller-manager-69f8888797-rwgkl\" (UID: \"54cfc1b0-0f93-4c97-8675-11d76038c0e7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl"
Feb 16 02:36:06.852903 master-0 kubenswrapper[31559]: I0216 02:36:06.852873 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77gqj\" (UniqueName: \"kubernetes.io/projected/d7bce36d-c48f-4156-ba5e-170d77b35445-kube-api-access-77gqj\") pod \"swift-operator-controller-manager-68f46476f-9f485\" (UID: \"d7bce36d-c48f-4156-ba5e-170d77b35445\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485"
Feb 16 02:36:06.852903 master-0 kubenswrapper[31559]: I0216 02:36:06.852921 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz29n\" (UniqueName: \"kubernetes.io/projected/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-kube-api-access-lz29n\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"
Feb 16 02:36:06.856801 master-0 kubenswrapper[31559]: I0216 02:36:06.852949 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhwh9\" (UniqueName: \"kubernetes.io/projected/9a9d5eac-39bd-4752-bb69-d936ad60f9a9-kube-api-access-bhwh9\") pod \"ovn-operator-controller-manager-d44cf6b75-7cfks\" (UID: \"9a9d5eac-39bd-4752-bb69-d936ad60f9a9\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"
Feb 16 02:36:06.856801 master-0 kubenswrapper[31559]: I0216 02:36:06.852979 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptp2r\" (UniqueName: \"kubernetes.io/projected/c1e8e453-6a51-4f81-90de-4ee9bbc7a53b-kube-api-access-ptp2r\") pod \"placement-operator-controller-manager-8497b45c89-qlvbm\" (UID: \"c1e8e453-6a51-4f81-90de-4ee9bbc7a53b\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm"
Feb 16 02:36:06.856801 master-0 kubenswrapper[31559]: E0216 02:36:06.853124 31559 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 02:36:06.856801 master-0 kubenswrapper[31559]: E0216 02:36:06.853163 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert podName:8deae9af-57d0-43f7-a94f-b9e4153c5f4d nodeName:}" failed. No retries permitted until 2026-02-16 02:36:07.353149606 +0000 UTC m=+819.697755621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" (UID: "8deae9af-57d0-43f7-a94f-b9e4153c5f4d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 02:36:06.869232 master-0 kubenswrapper[31559]: I0216 02:36:06.864050 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-fxcr2"]
Feb 16 02:36:06.869232 master-0 kubenswrapper[31559]: I0216 02:36:06.867653 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2"
Feb 16 02:36:06.874290 master-0 kubenswrapper[31559]: I0216 02:36:06.872897 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhwh9\" (UniqueName: \"kubernetes.io/projected/9a9d5eac-39bd-4752-bb69-d936ad60f9a9-kube-api-access-bhwh9\") pod \"ovn-operator-controller-manager-d44cf6b75-7cfks\" (UID: \"9a9d5eac-39bd-4752-bb69-d936ad60f9a9\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"
Feb 16 02:36:06.877680 master-0 kubenswrapper[31559]: I0216 02:36:06.875981 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pgql\" (UniqueName: \"kubernetes.io/projected/f954fd81-2f66-4e9e-b6da-f5b3080b09a3-kube-api-access-8pgql\") pod \"nova-operator-controller-manager-567668f5cf-pzmcd\" (UID: \"f954fd81-2f66-4e9e-b6da-f5b3080b09a3\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"
Feb 16 02:36:06.879701 master-0 kubenswrapper[31559]: I0216 02:36:06.879659 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-fxcr2"]
Feb 16 02:36:06.895116 master-0 kubenswrapper[31559]: I0216 02:36:06.895065 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz29n\" (UniqueName: \"kubernetes.io/projected/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-kube-api-access-lz29n\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"
Feb 16 02:36:06.895501 master-0 kubenswrapper[31559]: I0216 02:36:06.895466 31559 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr" Feb 16 02:36:06.905296 master-0 kubenswrapper[31559]: I0216 02:36:06.905240 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx"] Feb 16 02:36:06.906517 master-0 kubenswrapper[31559]: I0216 02:36:06.906490 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" Feb 16 02:36:06.909640 master-0 kubenswrapper[31559]: I0216 02:36:06.909605 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7blw\" (UniqueName: \"kubernetes.io/projected/54cfc1b0-0f93-4c97-8675-11d76038c0e7-kube-api-access-x7blw\") pod \"octavia-operator-controller-manager-69f8888797-rwgkl\" (UID: \"54cfc1b0-0f93-4c97-8675-11d76038c0e7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl" Feb 16 02:36:06.913456 master-0 kubenswrapper[31559]: I0216 02:36:06.913413 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx"] Feb 16 02:36:06.919717 master-0 kubenswrapper[31559]: I0216 02:36:06.919670 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd" Feb 16 02:36:06.956502 master-0 kubenswrapper[31559]: I0216 02:36:06.954428 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptp2r\" (UniqueName: \"kubernetes.io/projected/c1e8e453-6a51-4f81-90de-4ee9bbc7a53b-kube-api-access-ptp2r\") pod \"placement-operator-controller-manager-8497b45c89-qlvbm\" (UID: \"c1e8e453-6a51-4f81-90de-4ee9bbc7a53b\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" Feb 16 02:36:06.956502 master-0 kubenswrapper[31559]: I0216 02:36:06.954534 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bspvx\" (UniqueName: \"kubernetes.io/projected/a78ae1a9-3cbf-4147-9761-50a806bafceb-kube-api-access-bspvx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-9qhgd\" (UID: \"a78ae1a9-3cbf-4147-9761-50a806bafceb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd" Feb 16 02:36:06.956502 master-0 kubenswrapper[31559]: I0216 02:36:06.954627 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zk9b\" (UniqueName: \"kubernetes.io/projected/e9fd7b60-f1b5-4dd2-963a-1ec710c2a29c-kube-api-access-5zk9b\") pod \"test-operator-controller-manager-7866795846-fxcr2\" (UID: \"e9fd7b60-f1b5-4dd2-963a-1ec710c2a29c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2" Feb 16 02:36:06.956502 master-0 kubenswrapper[31559]: I0216 02:36:06.954717 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77gqj\" (UniqueName: \"kubernetes.io/projected/d7bce36d-c48f-4156-ba5e-170d77b35445-kube-api-access-77gqj\") pod \"swift-operator-controller-manager-68f46476f-9f485\" (UID: \"d7bce36d-c48f-4156-ba5e-170d77b35445\") " 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485" Feb 16 02:36:06.970505 master-0 kubenswrapper[31559]: I0216 02:36:06.960745 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj"] Feb 16 02:36:06.970505 master-0 kubenswrapper[31559]: I0216 02:36:06.962020 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:06.970505 master-0 kubenswrapper[31559]: I0216 02:36:06.965175 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 02:36:06.970505 master-0 kubenswrapper[31559]: I0216 02:36:06.965401 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 02:36:06.984581 master-0 kubenswrapper[31559]: I0216 02:36:06.984532 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77gqj\" (UniqueName: \"kubernetes.io/projected/d7bce36d-c48f-4156-ba5e-170d77b35445-kube-api-access-77gqj\") pod \"swift-operator-controller-manager-68f46476f-9f485\" (UID: \"d7bce36d-c48f-4156-ba5e-170d77b35445\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485" Feb 16 02:36:06.996008 master-0 kubenswrapper[31559]: I0216 02:36:06.995520 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj"] Feb 16 02:36:07.013845 master-0 kubenswrapper[31559]: I0216 02:36:07.012237 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptp2r\" (UniqueName: \"kubernetes.io/projected/c1e8e453-6a51-4f81-90de-4ee9bbc7a53b-kube-api-access-ptp2r\") pod \"placement-operator-controller-manager-8497b45c89-qlvbm\" (UID: \"c1e8e453-6a51-4f81-90de-4ee9bbc7a53b\") " 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" Feb 16 02:36:07.013845 master-0 kubenswrapper[31559]: I0216 02:36:07.012625 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bspvx\" (UniqueName: \"kubernetes.io/projected/a78ae1a9-3cbf-4147-9761-50a806bafceb-kube-api-access-bspvx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-9qhgd\" (UID: \"a78ae1a9-3cbf-4147-9761-50a806bafceb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd" Feb 16 02:36:07.026857 master-0 kubenswrapper[31559]: I0216 02:36:07.026786 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97"] Feb 16 02:36:07.028166 master-0 kubenswrapper[31559]: I0216 02:36:07.028114 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97" Feb 16 02:36:07.057819 master-0 kubenswrapper[31559]: I0216 02:36:07.057325 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" Feb 16 02:36:07.057819 master-0 kubenswrapper[31559]: I0216 02:36:07.057403 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.057819 master-0 kubenswrapper[31559]: I0216 02:36:07.057525 31559 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rm58\" (UniqueName: \"kubernetes.io/projected/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-kube-api-access-9rm58\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.057819 master-0 kubenswrapper[31559]: I0216 02:36:07.057569 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.057819 master-0 kubenswrapper[31559]: I0216 02:36:07.057623 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zk9b\" (UniqueName: \"kubernetes.io/projected/e9fd7b60-f1b5-4dd2-963a-1ec710c2a29c-kube-api-access-5zk9b\") pod \"test-operator-controller-manager-7866795846-fxcr2\" (UID: \"e9fd7b60-f1b5-4dd2-963a-1ec710c2a29c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2" Feb 16 02:36:07.057819 master-0 kubenswrapper[31559]: I0216 02:36:07.057673 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgxk\" (UniqueName: \"kubernetes.io/projected/32209368-fa30-48bc-9066-5e433752be92-kube-api-access-vkgxk\") pod \"watcher-operator-controller-manager-5db88f68c-vr8wx\" (UID: \"32209368-fa30-48bc-9066-5e433752be92\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" Feb 16 02:36:07.058504 master-0 kubenswrapper[31559]: E0216 02:36:07.058107 31559 secret.go:189] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 02:36:07.058504 master-0 kubenswrapper[31559]: E0216 02:36:07.058148 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert podName:5a00ec8c-7fd9-4450-8879-af88897ebfc6 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:08.058133739 +0000 UTC m=+820.402739754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert") pod "infra-operator-controller-manager-5f879c76b6-ftls6" (UID: "5a00ec8c-7fd9-4450-8879-af88897ebfc6") : secret "infra-operator-webhook-server-cert" not found Feb 16 02:36:07.066464 master-0 kubenswrapper[31559]: I0216 02:36:07.063220 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97"] Feb 16 02:36:07.070692 master-0 kubenswrapper[31559]: I0216 02:36:07.069030 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl" Feb 16 02:36:07.092607 master-0 kubenswrapper[31559]: I0216 02:36:07.087766 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zk9b\" (UniqueName: \"kubernetes.io/projected/e9fd7b60-f1b5-4dd2-963a-1ec710c2a29c-kube-api-access-5zk9b\") pod \"test-operator-controller-manager-7866795846-fxcr2\" (UID: \"e9fd7b60-f1b5-4dd2-963a-1ec710c2a29c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2" Feb 16 02:36:07.092607 master-0 kubenswrapper[31559]: I0216 02:36:07.092540 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks" Feb 16 02:36:07.158972 master-0 kubenswrapper[31559]: E0216 02:36:07.158922 31559 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 02:36:07.159155 master-0 kubenswrapper[31559]: E0216 02:36:07.158995 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:07.658978487 +0000 UTC m=+820.003584502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "metrics-server-cert" not found Feb 16 02:36:07.160107 master-0 kubenswrapper[31559]: I0216 02:36:07.160061 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.160370 master-0 kubenswrapper[31559]: I0216 02:36:07.160346 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xg9q\" (UniqueName: \"kubernetes.io/projected/02612eda-1ad2-4ece-9080-f565e8c98910-kube-api-access-4xg9q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mpg97\" (UID: \"02612eda-1ad2-4ece-9080-f565e8c98910\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97" Feb 16 02:36:07.160531 master-0 kubenswrapper[31559]: I0216 
02:36:07.160471 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgxk\" (UniqueName: \"kubernetes.io/projected/32209368-fa30-48bc-9066-5e433752be92-kube-api-access-vkgxk\") pod \"watcher-operator-controller-manager-5db88f68c-vr8wx\" (UID: \"32209368-fa30-48bc-9066-5e433752be92\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" Feb 16 02:36:07.160710 master-0 kubenswrapper[31559]: I0216 02:36:07.160679 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.160749 master-0 kubenswrapper[31559]: I0216 02:36:07.160735 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rm58\" (UniqueName: \"kubernetes.io/projected/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-kube-api-access-9rm58\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.160901 master-0 kubenswrapper[31559]: E0216 02:36:07.160882 31559 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 02:36:07.160954 master-0 kubenswrapper[31559]: E0216 02:36:07.160940 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:07.660921598 +0000 UTC m=+820.005527613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "webhook-server-cert" not found Feb 16 02:36:07.188089 master-0 kubenswrapper[31559]: I0216 02:36:07.188048 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgxk\" (UniqueName: \"kubernetes.io/projected/32209368-fa30-48bc-9066-5e433752be92-kube-api-access-vkgxk\") pod \"watcher-operator-controller-manager-5db88f68c-vr8wx\" (UID: \"32209368-fa30-48bc-9066-5e433752be92\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" Feb 16 02:36:07.192565 master-0 kubenswrapper[31559]: I0216 02:36:07.192515 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rm58\" (UniqueName: \"kubernetes.io/projected/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-kube-api-access-9rm58\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.220205 master-0 kubenswrapper[31559]: I0216 02:36:07.220170 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" Feb 16 02:36:07.239892 master-0 kubenswrapper[31559]: I0216 02:36:07.239834 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485" Feb 16 02:36:07.264160 master-0 kubenswrapper[31559]: I0216 02:36:07.264109 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd" Feb 16 02:36:07.278764 master-0 kubenswrapper[31559]: I0216 02:36:07.278713 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xg9q\" (UniqueName: \"kubernetes.io/projected/02612eda-1ad2-4ece-9080-f565e8c98910-kube-api-access-4xg9q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mpg97\" (UID: \"02612eda-1ad2-4ece-9080-f565e8c98910\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97" Feb 16 02:36:07.303507 master-0 kubenswrapper[31559]: I0216 02:36:07.302927 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xg9q\" (UniqueName: \"kubernetes.io/projected/02612eda-1ad2-4ece-9080-f565e8c98910-kube-api-access-4xg9q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mpg97\" (UID: \"02612eda-1ad2-4ece-9080-f565e8c98910\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97" Feb 16 02:36:07.329990 master-0 kubenswrapper[31559]: I0216 02:36:07.327065 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2" Feb 16 02:36:07.353872 master-0 kubenswrapper[31559]: I0216 02:36:07.353755 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"] Feb 16 02:36:07.372317 master-0 kubenswrapper[31559]: I0216 02:36:07.363030 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s"] Feb 16 02:36:07.390837 master-0 kubenswrapper[31559]: I0216 02:36:07.388947 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" Feb 16 02:36:07.390837 master-0 kubenswrapper[31559]: I0216 02:36:07.389660 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:07.390837 master-0 kubenswrapper[31559]: E0216 02:36:07.389863 31559 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 02:36:07.390837 master-0 kubenswrapper[31559]: E0216 02:36:07.389915 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert podName:8deae9af-57d0-43f7-a94f-b9e4153c5f4d nodeName:}" failed. No retries permitted until 2026-02-16 02:36:08.389900138 +0000 UTC m=+820.734506153 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" (UID: "8deae9af-57d0-43f7-a94f-b9e4153c5f4d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 02:36:07.400655 master-0 kubenswrapper[31559]: I0216 02:36:07.399854 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj" event={"ID":"5984cc07-3c42-4d98-aed0-1ffa30bff993","Type":"ContainerStarted","Data":"6f09e62771ac6eb50e6e98be51ae0ac03786c941af885c1614516bf3849e00de"} Feb 16 02:36:07.402499 master-0 kubenswrapper[31559]: I0216 02:36:07.401521 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s" event={"ID":"3c209763-741a-4879-af3f-6d378990293e","Type":"ContainerStarted","Data":"e8f17c7abd7c72563442819114072b55879b5a5ff86815b47688465df76aa8e8"} Feb 16 02:36:07.518388 master-0 kubenswrapper[31559]: I0216 02:36:07.518347 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb"] Feb 16 02:36:07.535670 master-0 kubenswrapper[31559]: I0216 02:36:07.535627 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97" Feb 16 02:36:07.695451 master-0 kubenswrapper[31559]: I0216 02:36:07.694566 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.695451 master-0 kubenswrapper[31559]: I0216 02:36:07.694650 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:07.695451 master-0 kubenswrapper[31559]: E0216 02:36:07.694700 31559 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 02:36:07.695451 master-0 kubenswrapper[31559]: E0216 02:36:07.694777 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:08.694757614 +0000 UTC m=+821.039363719 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "webhook-server-cert" not found Feb 16 02:36:07.695451 master-0 kubenswrapper[31559]: E0216 02:36:07.694819 31559 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 02:36:07.695451 master-0 kubenswrapper[31559]: E0216 02:36:07.694865 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:08.694852137 +0000 UTC m=+821.039458152 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "metrics-server-cert" not found Feb 16 02:36:08.028059 master-0 kubenswrapper[31559]: I0216 02:36:08.028015 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"] Feb 16 02:36:08.051368 master-0 kubenswrapper[31559]: I0216 02:36:08.051164 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"] Feb 16 02:36:08.058838 master-0 kubenswrapper[31559]: I0216 02:36:08.058673 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"] Feb 16 02:36:08.075675 master-0 kubenswrapper[31559]: I0216 02:36:08.065483 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"] Feb 16 02:36:08.075675 master-0 kubenswrapper[31559]: I0216 02:36:08.071691 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"] Feb 16 02:36:08.094336 master-0 kubenswrapper[31559]: I0216 02:36:08.094233 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"] Feb 16 02:36:08.105806 master-0 kubenswrapper[31559]: I0216 02:36:08.105746 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" Feb 16 02:36:08.109562 master-0 kubenswrapper[31559]: E0216 02:36:08.108624 31559 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 02:36:08.109562 master-0 kubenswrapper[31559]: E0216 02:36:08.108689 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert podName:5a00ec8c-7fd9-4450-8879-af88897ebfc6 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:10.108673244 +0000 UTC m=+822.453279259 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert") pod "infra-operator-controller-manager-5f879c76b6-ftls6" (UID: "5a00ec8c-7fd9-4450-8879-af88897ebfc6") : secret "infra-operator-webhook-server-cert" not found Feb 16 02:36:08.361101 master-0 kubenswrapper[31559]: I0216 02:36:08.361033 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"] Feb 16 02:36:08.377267 master-0 kubenswrapper[31559]: I0216 02:36:08.377205 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"] Feb 16 02:36:08.389265 master-0 kubenswrapper[31559]: I0216 02:36:08.389208 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"] Feb 16 02:36:08.446475 master-0 kubenswrapper[31559]: I0216 02:36:08.434487 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:08.446475 master-0 kubenswrapper[31559]: E0216 02:36:08.434817 31559 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 02:36:08.446475 master-0 kubenswrapper[31559]: E0216 02:36:08.434866 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert podName:8deae9af-57d0-43f7-a94f-b9e4153c5f4d nodeName:}" failed. 
No retries permitted until 2026-02-16 02:36:10.434851528 +0000 UTC m=+822.779457543 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" (UID: "8deae9af-57d0-43f7-a94f-b9e4153c5f4d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 02:36:08.495456 master-0 kubenswrapper[31559]: I0216 02:36:08.476620 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr" event={"ID":"030323ce-de29-4144-bba5-f811e997f7d8","Type":"ContainerStarted","Data":"4248377bd00df407135b88e44d9c7e4f63087a355d904eb284abe72f6d6f671d"} Feb 16 02:36:08.495456 master-0 kubenswrapper[31559]: I0216 02:36:08.492749 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f" event={"ID":"15fefa75-bf9a-46e2-b22e-8fcdd3188788","Type":"ContainerStarted","Data":"da2eef96670f1a5ae21fd8d875af0ef3a5b3fd3b524aeced5f31d4c354d880c0"} Feb 16 02:36:08.495456 master-0 kubenswrapper[31559]: I0216 02:36:08.495153 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg" event={"ID":"6c48078c-3fd8-4c0c-a7ea-52e0e015324d","Type":"ContainerStarted","Data":"81ab039a3f8338034be7e6b92301c3d45bd814317101f5e956603e3e96799819"} Feb 16 02:36:08.509482 master-0 kubenswrapper[31559]: I0216 02:36:08.505325 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs" event={"ID":"85c662d6-ab4c-4999-93cf-da87dfff2271","Type":"ContainerStarted","Data":"dc1be96477bfe912236d04c26f4dee41ac2db0d97a700490126cef747335d2c7"} Feb 16 02:36:08.509482 master-0 kubenswrapper[31559]: I0216 02:36:08.506523 31559 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl" event={"ID":"ac5aa459-16a0-47f5-9cb1-7872b5342ce5","Type":"ContainerStarted","Data":"3f07607e4e2e211332705fc589f569b418507e68cbdd626a00484bfaaea126da"} Feb 16 02:36:08.509482 master-0 kubenswrapper[31559]: I0216 02:36:08.507177 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd" event={"ID":"f954fd81-2f66-4e9e-b6da-f5b3080b09a3","Type":"ContainerStarted","Data":"e1a7d78fa910b6203713dfc5ff90498457c81b6c3c5fcc4b404cb584681b2b62"} Feb 16 02:36:08.509482 master-0 kubenswrapper[31559]: I0216 02:36:08.508228 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27" event={"ID":"76205954-3caf-4932-bb2c-c2315357a221","Type":"ContainerStarted","Data":"4f7e68e45ecaa2908bb7e7ab1912955ae6ebcb56073b50e9d03d44db6331b4fe"} Feb 16 02:36:08.509482 master-0 kubenswrapper[31559]: I0216 02:36:08.508887 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb" event={"ID":"2037bd66-cf51-4a19-aaf6-6a95950a0640","Type":"ContainerStarted","Data":"c3d5969fce947f47955cc7a9c23b617bd77998229515ec01f2685a5c55754583"} Feb 16 02:36:08.509818 master-0 kubenswrapper[31559]: I0216 02:36:08.509550 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45" event={"ID":"c4437af6-8e55-42d9-8c6c-61965282cea0","Type":"ContainerStarted","Data":"0bd2a0d346f02a327e3b8980a22675eaaa9d57e7141583ac685fd651f1aff5d6"} Feb 16 02:36:08.517457 master-0 kubenswrapper[31559]: I0216 02:36:08.515068 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv" 
event={"ID":"09111ae4-1998-425c-b30a-afbbc4a75dba","Type":"ContainerStarted","Data":"f74314384d00b0db5d18be06ee16614bfd4481e174420d9da5ca76a4cd64f585"} Feb 16 02:36:08.739054 master-0 kubenswrapper[31559]: I0216 02:36:08.738988 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:08.739321 master-0 kubenswrapper[31559]: E0216 02:36:08.739274 31559 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 02:36:08.739395 master-0 kubenswrapper[31559]: I0216 02:36:08.739377 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:08.739555 master-0 kubenswrapper[31559]: E0216 02:36:08.739540 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:10.73952109 +0000 UTC m=+823.084127105 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "webhook-server-cert" not found Feb 16 02:36:08.739660 master-0 kubenswrapper[31559]: E0216 02:36:08.739606 31559 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 02:36:08.739727 master-0 kubenswrapper[31559]: E0216 02:36:08.739708 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:10.739683124 +0000 UTC m=+823.084289149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "metrics-server-cert" not found Feb 16 02:36:08.814477 master-0 kubenswrapper[31559]: I0216 02:36:08.813208 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd"] Feb 16 02:36:08.856521 master-0 kubenswrapper[31559]: I0216 02:36:08.854567 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97"] Feb 16 02:36:08.866076 master-0 kubenswrapper[31559]: W0216 02:36:08.866020 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54cfc1b0_0f93_4c97_8675_11d76038c0e7.slice/crio-087ee8f3bc8dfcdb7a77c2ddb3f6898907fe720639a77369276e1f58072cd410 WatchSource:0}: Error finding container 
087ee8f3bc8dfcdb7a77c2ddb3f6898907fe720639a77369276e1f58072cd410: Status 404 returned error can't find the container with id 087ee8f3bc8dfcdb7a77c2ddb3f6898907fe720639a77369276e1f58072cd410 Feb 16 02:36:08.866424 master-0 kubenswrapper[31559]: E0216 02:36:08.866326 31559 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vkgxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-vr8wx_openstack-operators(32209368-fa30-48bc-9066-5e433752be92): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 02:36:08.867863 master-0 kubenswrapper[31559]: E0216 02:36:08.867791 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" podUID="32209368-fa30-48bc-9066-5e433752be92" Feb 16 02:36:08.868208 master-0 kubenswrapper[31559]: I0216 02:36:08.867995 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"] Feb 16 02:36:08.868461 master-0 kubenswrapper[31559]: E0216 02:36:08.868404 31559 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7blw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-rwgkl_openstack-operators(54cfc1b0-0f93-4c97-8675-11d76038c0e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 02:36:08.870614 master-0 kubenswrapper[31559]: E0216 02:36:08.869641 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl" podUID="54cfc1b0-0f93-4c97-8675-11d76038c0e7" Feb 16 02:36:08.870614 master-0 kubenswrapper[31559]: W0216 02:36:08.870119 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1e8e453_6a51_4f81_90de_4ee9bbc7a53b.slice/crio-d38c26d834b908629128faf546886f69664673f73e6bcf24051a159517832b9b WatchSource:0}: Error finding container d38c26d834b908629128faf546886f69664673f73e6bcf24051a159517832b9b: Status 404 returned error can't find the container with id d38c26d834b908629128faf546886f69664673f73e6bcf24051a159517832b9b Feb 16 02:36:08.874287 master-0 kubenswrapper[31559]: E0216 02:36:08.874244 31559 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptp2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-qlvbm_openstack-operators(c1e8e453-6a51-4f81-90de-4ee9bbc7a53b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 02:36:08.876106 master-0 kubenswrapper[31559]: E0216 02:36:08.875974 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" podUID="c1e8e453-6a51-4f81-90de-4ee9bbc7a53b" Feb 16 02:36:08.880101 master-0 kubenswrapper[31559]: I0216 02:36:08.880083 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-9f485"] Feb 16 02:36:08.888091 master-0 kubenswrapper[31559]: I0216 02:36:08.888059 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm"] Feb 16 02:36:08.895008 master-0 kubenswrapper[31559]: I0216 02:36:08.894986 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl"] Feb 16 02:36:08.902889 master-0 
kubenswrapper[31559]: I0216 02:36:08.902865 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx"] Feb 16 02:36:08.909585 master-0 kubenswrapper[31559]: I0216 02:36:08.909528 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-fxcr2"] Feb 16 02:36:09.531461 master-0 kubenswrapper[31559]: I0216 02:36:09.530985 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd" event={"ID":"a78ae1a9-3cbf-4147-9761-50a806bafceb","Type":"ContainerStarted","Data":"9db26fca12dd045d84fee32946b18f5dc1318bec237b890dd1dcb498b6523599"} Feb 16 02:36:09.538992 master-0 kubenswrapper[31559]: I0216 02:36:09.533322 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" event={"ID":"32209368-fa30-48bc-9066-5e433752be92","Type":"ContainerStarted","Data":"13ce3de8ccd479f4ab7735c5b75a9790f6b43cc0b7ef7b92faeb64877937facc"} Feb 16 02:36:09.539328 master-0 kubenswrapper[31559]: E0216 02:36:09.539256 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" podUID="32209368-fa30-48bc-9066-5e433752be92" Feb 16 02:36:09.543500 master-0 kubenswrapper[31559]: I0216 02:36:09.543456 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485" event={"ID":"d7bce36d-c48f-4156-ba5e-170d77b35445","Type":"ContainerStarted","Data":"907e743dc3d04035c1cf69615f61a58712eb1453c3d24cb89b0d18bb0c5df569"} Feb 16 02:36:09.548458 master-0 
kubenswrapper[31559]: I0216 02:36:09.546461 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl" event={"ID":"54cfc1b0-0f93-4c97-8675-11d76038c0e7","Type":"ContainerStarted","Data":"087ee8f3bc8dfcdb7a77c2ddb3f6898907fe720639a77369276e1f58072cd410"} Feb 16 02:36:09.548458 master-0 kubenswrapper[31559]: E0216 02:36:09.548146 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl" podUID="54cfc1b0-0f93-4c97-8675-11d76038c0e7" Feb 16 02:36:09.551744 master-0 kubenswrapper[31559]: I0216 02:36:09.551679 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" event={"ID":"c1e8e453-6a51-4f81-90de-4ee9bbc7a53b","Type":"ContainerStarted","Data":"d38c26d834b908629128faf546886f69664673f73e6bcf24051a159517832b9b"} Feb 16 02:36:09.569804 master-0 kubenswrapper[31559]: E0216 02:36:09.569567 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" podUID="c1e8e453-6a51-4f81-90de-4ee9bbc7a53b" Feb 16 02:36:09.570833 master-0 kubenswrapper[31559]: I0216 02:36:09.570795 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2" 
event={"ID":"e9fd7b60-f1b5-4dd2-963a-1ec710c2a29c","Type":"ContainerStarted","Data":"cdaf35c32624e6fe36d2538a33851ec7501003503c26debc372825314831fec1"} Feb 16 02:36:09.598418 master-0 kubenswrapper[31559]: I0216 02:36:09.598369 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97" event={"ID":"02612eda-1ad2-4ece-9080-f565e8c98910","Type":"ContainerStarted","Data":"0eef1957e2bed373df8c378b4c5f8e2f12c5b0c0b285688c1517b62e682c6fa8"} Feb 16 02:36:09.602512 master-0 kubenswrapper[31559]: I0216 02:36:09.600159 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks" event={"ID":"9a9d5eac-39bd-4752-bb69-d936ad60f9a9","Type":"ContainerStarted","Data":"1bedcc1b2a42d2a42b97f9f90f311222eb32072960e0521a5b898e7c87df0fb0"} Feb 16 02:36:10.179559 master-0 kubenswrapper[31559]: I0216 02:36:10.179492 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" Feb 16 02:36:10.179783 master-0 kubenswrapper[31559]: E0216 02:36:10.179700 31559 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 02:36:10.179783 master-0 kubenswrapper[31559]: E0216 02:36:10.179747 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert podName:5a00ec8c-7fd9-4450-8879-af88897ebfc6 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:14.179734661 +0000 UTC m=+826.524340676 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert") pod "infra-operator-controller-manager-5f879c76b6-ftls6" (UID: "5a00ec8c-7fd9-4450-8879-af88897ebfc6") : secret "infra-operator-webhook-server-cert" not found Feb 16 02:36:10.485092 master-0 kubenswrapper[31559]: I0216 02:36:10.484977 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:10.485466 master-0 kubenswrapper[31559]: E0216 02:36:10.485413 31559 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 02:36:10.485939 master-0 kubenswrapper[31559]: E0216 02:36:10.485914 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert podName:8deae9af-57d0-43f7-a94f-b9e4153c5f4d nodeName:}" failed. No retries permitted until 2026-02-16 02:36:14.485536043 +0000 UTC m=+826.830142058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" (UID: "8deae9af-57d0-43f7-a94f-b9e4153c5f4d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 02:36:10.614078 master-0 kubenswrapper[31559]: E0216 02:36:10.614038 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl" podUID="54cfc1b0-0f93-4c97-8675-11d76038c0e7" Feb 16 02:36:10.614382 master-0 kubenswrapper[31559]: E0216 02:36:10.614353 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" podUID="c1e8e453-6a51-4f81-90de-4ee9bbc7a53b" Feb 16 02:36:10.625093 master-0 kubenswrapper[31559]: E0216 02:36:10.622833 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" podUID="32209368-fa30-48bc-9066-5e433752be92" Feb 16 02:36:10.794395 master-0 kubenswrapper[31559]: I0216 02:36:10.791811 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:10.794395 master-0 kubenswrapper[31559]: I0216 02:36:10.791917 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:10.794395 master-0 kubenswrapper[31559]: E0216 02:36:10.792168 31559 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 02:36:10.794395 master-0 kubenswrapper[31559]: E0216 02:36:10.792222 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:14.792208436 +0000 UTC m=+827.136814451 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "metrics-server-cert" not found Feb 16 02:36:10.794395 master-0 kubenswrapper[31559]: E0216 02:36:10.792658 31559 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 02:36:10.794395 master-0 kubenswrapper[31559]: E0216 02:36:10.792704 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:14.792677989 +0000 UTC m=+827.137284004 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "webhook-server-cert" not found Feb 16 02:36:14.271447 master-0 kubenswrapper[31559]: I0216 02:36:14.271342 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" Feb 16 02:36:14.277827 master-0 kubenswrapper[31559]: E0216 02:36:14.271500 31559 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 02:36:14.277827 master-0 kubenswrapper[31559]: E0216 02:36:14.271573 31559 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert podName:5a00ec8c-7fd9-4450-8879-af88897ebfc6 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:22.271556309 +0000 UTC m=+834.616162324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert") pod "infra-operator-controller-manager-5f879c76b6-ftls6" (UID: "5a00ec8c-7fd9-4450-8879-af88897ebfc6") : secret "infra-operator-webhook-server-cert" not found Feb 16 02:36:14.578206 master-0 kubenswrapper[31559]: I0216 02:36:14.578077 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:14.578458 master-0 kubenswrapper[31559]: E0216 02:36:14.578331 31559 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 02:36:14.578458 master-0 kubenswrapper[31559]: E0216 02:36:14.578448 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert podName:8deae9af-57d0-43f7-a94f-b9e4153c5f4d nodeName:}" failed. No retries permitted until 2026-02-16 02:36:22.578406697 +0000 UTC m=+834.923012722 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" (UID: "8deae9af-57d0-43f7-a94f-b9e4153c5f4d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 02:36:14.887771 master-0 kubenswrapper[31559]: I0216 02:36:14.887322 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj"
Feb 16 02:36:14.887771 master-0 kubenswrapper[31559]: E0216 02:36:14.887585 31559 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 02:36:14.887771 master-0 kubenswrapper[31559]: E0216 02:36:14.887687 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:22.887665718 +0000 UTC m=+835.232271813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "metrics-server-cert" not found
Feb 16 02:36:14.887771 master-0 kubenswrapper[31559]: I0216 02:36:14.887746 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj"
Feb 16 02:36:14.888118 master-0 kubenswrapper[31559]: E0216 02:36:14.887937 31559 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 02:36:14.888118 master-0 kubenswrapper[31559]: E0216 02:36:14.888030 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:22.888012247 +0000 UTC m=+835.232618252 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "webhook-server-cert" not found
Feb 16 02:36:22.354163 master-0 kubenswrapper[31559]: I0216 02:36:22.353992 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:22.355065 master-0 kubenswrapper[31559]: E0216 02:36:22.354205 31559 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 02:36:22.355065 master-0 kubenswrapper[31559]: E0216 02:36:22.354252 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert podName:5a00ec8c-7fd9-4450-8879-af88897ebfc6 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:38.354238682 +0000 UTC m=+850.698844697 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert") pod "infra-operator-controller-manager-5f879c76b6-ftls6" (UID: "5a00ec8c-7fd9-4450-8879-af88897ebfc6") : secret "infra-operator-webhook-server-cert" not found
Feb 16 02:36:22.662310 master-0 kubenswrapper[31559]: I0216 02:36:22.661976 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"
Feb 16 02:36:22.662310 master-0 kubenswrapper[31559]: E0216 02:36:22.662167 31559 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 02:36:22.662310 master-0 kubenswrapper[31559]: E0216 02:36:22.662279 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert podName:8deae9af-57d0-43f7-a94f-b9e4153c5f4d nodeName:}" failed. No retries permitted until 2026-02-16 02:36:38.662257211 +0000 UTC m=+851.006863226 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" (UID: "8deae9af-57d0-43f7-a94f-b9e4153c5f4d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 02:36:22.969000 master-0 kubenswrapper[31559]: I0216 02:36:22.968909 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj"
Feb 16 02:36:22.969000 master-0 kubenswrapper[31559]: I0216 02:36:22.969009 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj"
Feb 16 02:36:22.969369 master-0 kubenswrapper[31559]: E0216 02:36:22.969134 31559 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 02:36:22.969369 master-0 kubenswrapper[31559]: E0216 02:36:22.969225 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:38.969206992 +0000 UTC m=+851.313813007 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "webhook-server-cert" not found
Feb 16 02:36:22.969369 master-0 kubenswrapper[31559]: E0216 02:36:22.969247 31559 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 02:36:22.969369 master-0 kubenswrapper[31559]: E0216 02:36:22.969331 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs podName:326ce8bf-5b0c-4354-a324-ba2842ea7cd9 nodeName:}" failed. No retries permitted until 2026-02-16 02:36:38.969312365 +0000 UTC m=+851.313918380 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-6nxrj" (UID: "326ce8bf-5b0c-4354-a324-ba2842ea7cd9") : secret "metrics-server-cert" not found
Feb 16 02:36:27.874818 master-0 kubenswrapper[31559]: I0216 02:36:27.874672 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks" event={"ID":"9a9d5eac-39bd-4752-bb69-d936ad60f9a9","Type":"ContainerStarted","Data":"bacc6ccf294841e0e9f2ac8f05b28c4a02db4f0a5447f7ef4bc2ba79ee3f973a"}
Feb 16 02:36:27.875419 master-0 kubenswrapper[31559]: I0216 02:36:27.875302 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"
Feb 16 02:36:27.883331 master-0 kubenswrapper[31559]: I0216 02:36:27.883293 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd" event={"ID":"f954fd81-2f66-4e9e-b6da-f5b3080b09a3","Type":"ContainerStarted","Data":"ba5101390b83f3b38889bbe6bc830d2311a190ad8890f7b039bf30d2a75fd357"}
Feb 16 02:36:27.884030 master-0 kubenswrapper[31559]: I0216 02:36:27.883914 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"
Feb 16 02:36:27.887387 master-0 kubenswrapper[31559]: I0216 02:36:27.886703 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj" event={"ID":"5984cc07-3c42-4d98-aed0-1ffa30bff993","Type":"ContainerStarted","Data":"25f576a2caf0df1669df39b3ef55beb1aa03709ae1afd520121aac74af04c65c"}
Feb 16 02:36:27.887387 master-0 kubenswrapper[31559]: I0216 02:36:27.887359 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"
Feb 16 02:36:27.889334 master-0 kubenswrapper[31559]: I0216 02:36:27.888904 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f" event={"ID":"15fefa75-bf9a-46e2-b22e-8fcdd3188788","Type":"ContainerStarted","Data":"4c666c2cc9c76b787713955a946d6d49af64feda0e0d07a751089cca57f1b996"}
Feb 16 02:36:27.889334 master-0 kubenswrapper[31559]: I0216 02:36:27.889269 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"
Feb 16 02:36:27.891258 master-0 kubenswrapper[31559]: I0216 02:36:27.890823 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg" event={"ID":"6c48078c-3fd8-4c0c-a7ea-52e0e015324d","Type":"ContainerStarted","Data":"7b4be5c1e6653c12c6f2646f9b9795704ef88fa05943425a1ff3c94a0f013bbd"}
Feb 16 02:36:27.891595 master-0 kubenswrapper[31559]: I0216 02:36:27.891212 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"
Feb 16 02:36:27.896947 master-0 kubenswrapper[31559]: I0216 02:36:27.896484 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s" event={"ID":"3c209763-741a-4879-af3f-6d378990293e","Type":"ContainerStarted","Data":"ea4f4e16ae99fff611c083499ff1a2d402f0fd1238e03e20f88a15f84ca06f25"}
Feb 16 02:36:27.897447 master-0 kubenswrapper[31559]: I0216 02:36:27.897254 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s"
Feb 16 02:36:27.898589 master-0 kubenswrapper[31559]: I0216 02:36:27.898498 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks" podStartSLOduration=4.324210802 podStartE2EDuration="21.898481881s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.826759022 +0000 UTC m=+821.171365047" lastFinishedPulling="2026-02-16 02:36:26.401030111 +0000 UTC m=+838.745636126" observedRunningTime="2026-02-16 02:36:27.892626488 +0000 UTC m=+840.237232503" watchObservedRunningTime="2026-02-16 02:36:27.898481881 +0000 UTC m=+840.243087896"
Feb 16 02:36:27.907538 master-0 kubenswrapper[31559]: I0216 02:36:27.907073 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv" event={"ID":"09111ae4-1998-425c-b30a-afbbc4a75dba","Type":"ContainerStarted","Data":"bf8ace07de8734688ddbe2ed4d8dbd77d72e31cbaa4be9cecb4ec87e9aa9873e"}
Feb 16 02:36:27.907759 master-0 kubenswrapper[31559]: I0216 02:36:27.907712 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"
Feb 16 02:36:27.909321 master-0 kubenswrapper[31559]: I0216 02:36:27.909273 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl" event={"ID":"ac5aa459-16a0-47f5-9cb1-7872b5342ce5","Type":"ContainerStarted","Data":"73c1322449eaa792fae99bcdb186bda512291d2c31ad08b773fcf7ca0af1992e"}
Feb 16 02:36:27.909549 master-0 kubenswrapper[31559]: I0216 02:36:27.909416 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"
Feb 16 02:36:27.925953 master-0 kubenswrapper[31559]: I0216 02:36:27.925771 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f" podStartSLOduration=3.603218408 podStartE2EDuration="21.925671112s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.078727821 +0000 UTC m=+820.423333836" lastFinishedPulling="2026-02-16 02:36:26.401180535 +0000 UTC m=+838.745786540" observedRunningTime="2026-02-16 02:36:27.918566496 +0000 UTC m=+840.263172521" watchObservedRunningTime="2026-02-16 02:36:27.925671112 +0000 UTC m=+840.270277127"
Feb 16 02:36:27.950624 master-0 kubenswrapper[31559]: I0216 02:36:27.946788 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd" podStartSLOduration=4.425806209 podStartE2EDuration="21.946772254s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.370823353 +0000 UTC m=+820.715429398" lastFinishedPulling="2026-02-16 02:36:25.891789398 +0000 UTC m=+838.236395443" observedRunningTime="2026-02-16 02:36:27.945967343 +0000 UTC m=+840.290573358" watchObservedRunningTime="2026-02-16 02:36:27.946772254 +0000 UTC m=+840.291378269"
Feb 16 02:36:27.984366 master-0 kubenswrapper[31559]: I0216 02:36:27.983975 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj" podStartSLOduration=3.44661489 podStartE2EDuration="21.983959447s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:07.3544221 +0000 UTC m=+819.699028105" lastFinishedPulling="2026-02-16 02:36:25.891766607 +0000 UTC m=+838.236372662" observedRunningTime="2026-02-16 02:36:27.982180141 +0000 UTC m=+840.326786156" watchObservedRunningTime="2026-02-16 02:36:27.983959447 +0000 UTC m=+840.328565462"
Feb 16 02:36:28.011964 master-0 kubenswrapper[31559]: I0216 02:36:28.011303 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg" podStartSLOduration=3.989662959 podStartE2EDuration="22.011287972s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.378751481 +0000 UTC m=+820.723357526" lastFinishedPulling="2026-02-16 02:36:26.400376494 +0000 UTC m=+838.744982539" observedRunningTime="2026-02-16 02:36:28.006873577 +0000 UTC m=+840.351479592" watchObservedRunningTime="2026-02-16 02:36:28.011287972 +0000 UTC m=+840.355893987"
Feb 16 02:36:28.027907 master-0 kubenswrapper[31559]: I0216 02:36:28.027824 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv" podStartSLOduration=3.150137533 podStartE2EDuration="22.027802834s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.058477361 +0000 UTC m=+820.403083376" lastFinishedPulling="2026-02-16 02:36:26.936142662 +0000 UTC m=+839.280748677" observedRunningTime="2026-02-16 02:36:28.024690463 +0000 UTC m=+840.369296468" watchObservedRunningTime="2026-02-16 02:36:28.027802834 +0000 UTC m=+840.372408839"
Feb 16 02:36:28.070964 master-0 kubenswrapper[31559]: I0216 02:36:28.070730 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s" podStartSLOduration=4.540802225 podStartE2EDuration="23.070705517s" podCreationTimestamp="2026-02-16 02:36:05 +0000 UTC" firstStartedPulling="2026-02-16 02:36:07.361753222 +0000 UTC m=+819.706359237" lastFinishedPulling="2026-02-16 02:36:25.891656514 +0000 UTC m=+838.236262529" observedRunningTime="2026-02-16 02:36:28.048798034 +0000 UTC m=+840.393404049" watchObservedRunningTime="2026-02-16 02:36:28.070705517 +0000 UTC m=+840.415311532"
Feb 16 02:36:28.073498 master-0 kubenswrapper[31559]: I0216 02:36:28.073212 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl" podStartSLOduration=4.239656039 podStartE2EDuration="22.073203912s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.058276256 +0000 UTC m=+820.402882271" lastFinishedPulling="2026-02-16 02:36:25.891824129 +0000 UTC m=+838.236430144" observedRunningTime="2026-02-16 02:36:28.070345908 +0000 UTC m=+840.414951923" watchObservedRunningTime="2026-02-16 02:36:28.073203912 +0000 UTC m=+840.417809927"
Feb 16 02:36:28.929516 master-0 kubenswrapper[31559]: I0216 02:36:28.927190 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" event={"ID":"32209368-fa30-48bc-9066-5e433752be92","Type":"ContainerStarted","Data":"82cf0bc7c9040d09b2da4de6dbc73203745bc7bf80a64d284cefff11d25cf15e"}
Feb 16 02:36:28.929516 master-0 kubenswrapper[31559]: I0216 02:36:28.927503 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx"
Feb 16 02:36:28.935457 master-0 kubenswrapper[31559]: I0216 02:36:28.932532 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl" event={"ID":"54cfc1b0-0f93-4c97-8675-11d76038c0e7","Type":"ContainerStarted","Data":"3ded48251a20029493bb565344aa992123864347c1249c589b26fe696f6b83d8"}
Feb 16 02:36:28.935457 master-0 kubenswrapper[31559]: I0216 02:36:28.932835 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl"
Feb 16 02:36:28.935457 master-0 kubenswrapper[31559]: I0216 02:36:28.933914 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb" event={"ID":"2037bd66-cf51-4a19-aaf6-6a95950a0640","Type":"ContainerStarted","Data":"34399ab2796c31960b84fd7a51301919ed2419dccad2c1a5ddeb7f9d848e9625"}
Feb 16 02:36:28.935457 master-0 kubenswrapper[31559]: I0216 02:36:28.934271 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb"
Feb 16 02:36:28.937159 master-0 kubenswrapper[31559]: I0216 02:36:28.935867 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" event={"ID":"c1e8e453-6a51-4f81-90de-4ee9bbc7a53b","Type":"ContainerStarted","Data":"cfb0130282e456123761326849991652cbf35912f5b0a54b175a9b7307cfd232"}
Feb 16 02:36:28.937159 master-0 kubenswrapper[31559]: I0216 02:36:28.936259 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm"
Feb 16 02:36:28.939060 master-0 kubenswrapper[31559]: I0216 02:36:28.938653 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97" event={"ID":"02612eda-1ad2-4ece-9080-f565e8c98910","Type":"ContainerStarted","Data":"977f87bc0d6a22d01c87cf915c8e5a0351a4c7bffd03061cbc832e486b8049af"}
Feb 16 02:36:28.956527 master-0 kubenswrapper[31559]: I0216 02:36:28.955542 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx" podStartSLOduration=4.798311946 podStartE2EDuration="22.955529777s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.864959632 +0000 UTC m=+821.209565657" lastFinishedPulling="2026-02-16 02:36:27.022177423 +0000 UTC m=+839.366783488" observedRunningTime="2026-02-16 02:36:28.952287542 +0000 UTC m=+841.296893557" watchObservedRunningTime="2026-02-16 02:36:28.955529777 +0000 UTC m=+841.300135782"
Feb 16 02:36:28.956527 master-0 kubenswrapper[31559]: I0216 02:36:28.955738 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs" event={"ID":"85c662d6-ab4c-4999-93cf-da87dfff2271","Type":"ContainerStarted","Data":"45a076e8c6e33906b10297450573f78bc8d3263a9d6132281f4008c2047ea95d"}
Feb 16 02:36:28.956776 master-0 kubenswrapper[31559]: I0216 02:36:28.956551 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"
Feb 16 02:36:28.958802 master-0 kubenswrapper[31559]: I0216 02:36:28.958782 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd" event={"ID":"a78ae1a9-3cbf-4147-9761-50a806bafceb","Type":"ContainerStarted","Data":"f9fb336b72e818207584ffe92c4252c44a5282d70e8ea2fd31acdcc05758b9e6"}
Feb 16 02:36:28.959230 master-0 kubenswrapper[31559]: I0216 02:36:28.959215 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd"
Feb 16 02:36:28.960323 master-0 kubenswrapper[31559]: I0216 02:36:28.960303 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485" event={"ID":"d7bce36d-c48f-4156-ba5e-170d77b35445","Type":"ContainerStarted","Data":"9c2f4dadec0a29064d997b1f8389164275a908e873247bfd5a01d1eb35936d48"}
Feb 16 02:36:28.960712 master-0 kubenswrapper[31559]: I0216 02:36:28.960693 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485"
Feb 16 02:36:28.963323 master-0 kubenswrapper[31559]: I0216 02:36:28.963306 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27" event={"ID":"76205954-3caf-4932-bb2c-c2315357a221","Type":"ContainerStarted","Data":"f1134ddfaac2fd9e817e54f0f1f0f26c551a4e62e72043fdd2e20962352035ab"}
Feb 16 02:36:28.963559 master-0 kubenswrapper[31559]: I0216 02:36:28.963523 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"
Feb 16 02:36:28.972500 master-0 kubenswrapper[31559]: I0216 02:36:28.964415 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr" event={"ID":"030323ce-de29-4144-bba5-f811e997f7d8","Type":"ContainerStarted","Data":"37e29e0f9619045786a3fc88fa61dcbf5943a1365e26591ba1d03cfbb659a471"}
Feb 16 02:36:28.972500 master-0 kubenswrapper[31559]: I0216 02:36:28.965926 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"
Feb 16 02:36:28.988197 master-0 kubenswrapper[31559]: I0216 02:36:28.985838 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl" podStartSLOduration=4.830713663 podStartE2EDuration="22.985823689s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.86831883 +0000 UTC m=+821.212924845" lastFinishedPulling="2026-02-16 02:36:27.023428856 +0000 UTC m=+839.368034871" observedRunningTime="2026-02-16 02:36:28.985756547 +0000 UTC m=+841.330362562" watchObservedRunningTime="2026-02-16 02:36:28.985823689 +0000 UTC m=+841.330429704"
Feb 16 02:36:28.988709 master-0 kubenswrapper[31559]: I0216 02:36:28.988649 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45" event={"ID":"c4437af6-8e55-42d9-8c6c-61965282cea0","Type":"ContainerStarted","Data":"179e62440b1e8b7d1adc1ce891d51c940dd688fe08c303ecaf4483ada6877b7e"}
Feb 16 02:36:28.989407 master-0 kubenswrapper[31559]: I0216 02:36:28.989383 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"
Feb 16 02:36:28.996113 master-0 kubenswrapper[31559]: I0216 02:36:28.995824 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2" event={"ID":"e9fd7b60-f1b5-4dd2-963a-1ec710c2a29c","Type":"ContainerStarted","Data":"c27d6dbdf00db0ab6e48f784e0c9129017cc4ed333760018dd6afa81c61bd5ba"}
Feb 16 02:36:29.017460 master-0 kubenswrapper[31559]: I0216 02:36:29.016329 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mpg97" podStartSLOduration=4.8481751299999996 podStartE2EDuration="23.016310697s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.825600652 +0000 UTC m=+821.170206667" lastFinishedPulling="2026-02-16 02:36:26.993736219 +0000 UTC m=+839.338342234" observedRunningTime="2026-02-16 02:36:29.009768546 +0000 UTC m=+841.354374561" watchObservedRunningTime="2026-02-16 02:36:29.016310697 +0000 UTC m=+841.360916712"
Feb 16 02:36:29.053521 master-0 kubenswrapper[31559]: I0216 02:36:29.049732 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm" podStartSLOduration=3.340055331 podStartE2EDuration="23.049715401s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.87406983 +0000 UTC m=+821.218675845" lastFinishedPulling="2026-02-16 02:36:28.5837299 +0000 UTC m=+840.928335915" observedRunningTime="2026-02-16 02:36:29.045426309 +0000 UTC m=+841.390032324" watchObservedRunningTime="2026-02-16 02:36:29.049715401 +0000 UTC m=+841.394321406"
Feb 16 02:36:29.085619 master-0 kubenswrapper[31559]: I0216 02:36:29.085470 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb" podStartSLOduration=5.205213218 podStartE2EDuration="24.085414425s" podCreationTimestamp="2026-02-16 02:36:05 +0000 UTC" firstStartedPulling="2026-02-16 02:36:07.520053174 +0000 UTC m=+819.864659179" lastFinishedPulling="2026-02-16 02:36:26.400254371 +0000 UTC m=+838.744860386" observedRunningTime="2026-02-16 02:36:29.072951509 +0000 UTC m=+841.417557524" watchObservedRunningTime="2026-02-16 02:36:29.085414425 +0000 UTC m=+841.430020440"
Feb 16 02:36:29.101744 master-0 kubenswrapper[31559]: I0216 02:36:29.100481 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs" podStartSLOduration=4.165490589 podStartE2EDuration="23.100465969s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.060608907 +0000 UTC m=+820.405214922" lastFinishedPulling="2026-02-16 02:36:26.995584287 +0000 UTC m=+839.340190302" observedRunningTime="2026-02-16 02:36:29.097125411 +0000 UTC m=+841.441731426" watchObservedRunningTime="2026-02-16 02:36:29.100465969 +0000 UTC m=+841.445071984"
Feb 16 02:36:29.126936 master-0 kubenswrapper[31559]: I0216 02:36:29.126853 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd" podStartSLOduration=5.579752031 podStartE2EDuration="23.126836789s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.853191894 +0000 UTC m=+821.197797909" lastFinishedPulling="2026-02-16 02:36:26.400276652 +0000 UTC m=+838.744882667" observedRunningTime="2026-02-16 02:36:29.125901634 +0000 UTC m=+841.470507649" watchObservedRunningTime="2026-02-16 02:36:29.126836789 +0000 UTC m=+841.471442804"
Feb 16 02:36:29.175737 master-0 kubenswrapper[31559]: I0216 02:36:29.175663 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr" podStartSLOduration=4.611773255 podStartE2EDuration="23.175647246s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.372314522 +0000 UTC m=+820.716920547" lastFinishedPulling="2026-02-16 02:36:26.936188513 +0000 UTC m=+839.280794538" observedRunningTime="2026-02-16 02:36:29.170765608 +0000 UTC m=+841.515371623" watchObservedRunningTime="2026-02-16 02:36:29.175647246 +0000 UTC m=+841.520253261"
Feb 16 02:36:29.213520 master-0 kubenswrapper[31559]: I0216 02:36:29.210379 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45" podStartSLOduration=4.333818852 podStartE2EDuration="23.210361444s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.058736978 +0000 UTC m=+820.403342983" lastFinishedPulling="2026-02-16 02:36:26.93527955 +0000 UTC m=+839.279885575" observedRunningTime="2026-02-16 02:36:29.20486881 +0000 UTC m=+841.549474825" watchObservedRunningTime="2026-02-16 02:36:29.210361444 +0000 UTC m=+841.554967459"
Feb 16 02:36:29.263467 master-0 kubenswrapper[31559]: I0216 02:36:29.262097 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485" podStartSLOduration=6.226286937 podStartE2EDuration="23.262083567s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.855793082 +0000 UTC m=+821.200399097" lastFinishedPulling="2026-02-16 02:36:25.891589722 +0000 UTC m=+838.236195727" observedRunningTime="2026-02-16 02:36:29.259734776 +0000 UTC m=+841.604340791" watchObservedRunningTime="2026-02-16 02:36:29.262083567 +0000 UTC m=+841.606689572"
Feb 16 02:36:29.299457 master-0 kubenswrapper[31559]: I0216 02:36:29.299098 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2" podStartSLOduration=6.262283058 podStartE2EDuration="23.299082295s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.85496155 +0000 UTC m=+821.199567565" lastFinishedPulling="2026-02-16 02:36:25.891760777 +0000 UTC m=+838.236366802" observedRunningTime="2026-02-16 02:36:29.298686665 +0000 UTC m=+841.643292680" watchObservedRunningTime="2026-02-16 02:36:29.299082295 +0000 UTC m=+841.643688300"
Feb 16 02:36:29.338453 master-0 kubenswrapper[31559]: I0216 02:36:29.333582 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27" podStartSLOduration=4.450796462 podStartE2EDuration="23.333566547s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:08.053442689 +0000 UTC m=+820.398048704" lastFinishedPulling="2026-02-16 02:36:26.936212764 +0000 UTC m=+839.280818789" observedRunningTime="2026-02-16 02:36:29.332227612 +0000 UTC m=+841.676833627" watchObservedRunningTime="2026-02-16 02:36:29.333566547 +0000 UTC m=+841.678172562"
Feb 16 02:36:30.006824 master-0 kubenswrapper[31559]: I0216 02:36:30.006781 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2"
Feb 16 02:36:36.409882 master-0 kubenswrapper[31559]: I0216 02:36:36.409761 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-8rrtb"
Feb 16 02:36:36.439416 master-0 kubenswrapper[31559]: I0216 02:36:36.439301 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hpq8s"
Feb 16 02:36:36.463501 master-0 kubenswrapper[31559]: I0216 02:36:36.462423 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bktfj"
Feb 16 02:36:36.501588 master-0 kubenswrapper[31559]: I0216 02:36:36.500007 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5nzdl"
Feb 16 02:36:36.525649 master-0 kubenswrapper[31559]: I0216 02:36:36.525589 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-sdw27"
Feb 16 02:36:36.687574 master-0 kubenswrapper[31559]: I0216 02:36:36.687487 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kvc45"
Feb 16 02:36:36.805475 master-0 kubenswrapper[31559]: I0216 02:36:36.805124 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2nhlv"
Feb 16 02:36:36.816515 master-0 kubenswrapper[31559]: I0216 02:36:36.812898 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-lfqbs"
Feb 16 02:36:36.828668 master-0 kubenswrapper[31559]: I0216 02:36:36.828552 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sth8f"
Feb 16 02:36:36.838206 master-0 kubenswrapper[31559]: I0216 02:36:36.838060 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5mbdg"
Feb 16 02:36:36.900508 master-0 kubenswrapper[31559]: I0216 02:36:36.900410 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6wwcr"
Feb 16 02:36:36.926208 master-0 kubenswrapper[31559]: I0216 02:36:36.926133 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pzmcd"
Feb 16 02:36:37.073976 master-0 kubenswrapper[31559]: I0216 02:36:37.073808 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-rwgkl"
Feb 16 02:36:37.096268 master-0 kubenswrapper[31559]: I0216 02:36:37.096202 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-7cfks"
Feb 16 02:36:37.223086 master-0 kubenswrapper[31559]: I0216 02:36:37.223016 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qlvbm"
Feb 16 02:36:37.247747 master-0 kubenswrapper[31559]: I0216 02:36:37.247205 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-9f485"
Feb 16 02:36:37.277312 master-0 kubenswrapper[31559]: I0216 02:36:37.276965 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-9qhgd"
Feb 16 02:36:37.330800 master-0 kubenswrapper[31559]: I0216 02:36:37.330603 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-fxcr2"
Feb 16 02:36:37.392162 master-0 kubenswrapper[31559]: I0216 02:36:37.392108 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vr8wx"
Feb 16 02:36:38.359511 master-0 kubenswrapper[31559]: I0216 02:36:38.359380 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:38.364692 master-0 kubenswrapper[31559]: I0216 02:36:38.364635 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a00ec8c-7fd9-4450-8879-af88897ebfc6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ftls6\" (UID: \"5a00ec8c-7fd9-4450-8879-af88897ebfc6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"
Feb 16 02:36:38.539016 master-0 kubenswrapper[31559]: I0216 02:36:38.538927 31559 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" Feb 16 02:36:38.669659 master-0 kubenswrapper[31559]: I0216 02:36:38.669054 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:38.673919 master-0 kubenswrapper[31559]: I0216 02:36:38.673629 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8deae9af-57d0-43f7-a94f-b9e4153c5f4d-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f\" (UID: \"8deae9af-57d0-43f7-a94f-b9e4153c5f4d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:38.738486 master-0 kubenswrapper[31559]: I0216 02:36:38.729190 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:38.974616 master-0 kubenswrapper[31559]: I0216 02:36:38.974080 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:38.974857 master-0 kubenswrapper[31559]: I0216 02:36:38.974661 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:38.979343 master-0 kubenswrapper[31559]: I0216 02:36:38.979312 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:38.979796 master-0 kubenswrapper[31559]: I0216 02:36:38.979733 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/326ce8bf-5b0c-4354-a324-ba2842ea7cd9-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-6nxrj\" (UID: \"326ce8bf-5b0c-4354-a324-ba2842ea7cd9\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:39.066114 master-0 kubenswrapper[31559]: I0216 
02:36:39.066043 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6"] Feb 16 02:36:39.091370 master-0 kubenswrapper[31559]: W0216 02:36:39.091322 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a00ec8c_7fd9_4450_8879_af88897ebfc6.slice/crio-8f18b7ee607ec480835418f61465f2a84aee12151f03cf40680d94f28df27810 WatchSource:0}: Error finding container 8f18b7ee607ec480835418f61465f2a84aee12151f03cf40680d94f28df27810: Status 404 returned error can't find the container with id 8f18b7ee607ec480835418f61465f2a84aee12151f03cf40680d94f28df27810 Feb 16 02:36:39.122463 master-0 kubenswrapper[31559]: I0216 02:36:39.122395 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" event={"ID":"5a00ec8c-7fd9-4450-8879-af88897ebfc6","Type":"ContainerStarted","Data":"8f18b7ee607ec480835418f61465f2a84aee12151f03cf40680d94f28df27810"} Feb 16 02:36:39.199540 master-0 kubenswrapper[31559]: I0216 02:36:39.199263 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:39.256407 master-0 kubenswrapper[31559]: I0216 02:36:39.256323 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f"] Feb 16 02:36:39.766805 master-0 kubenswrapper[31559]: W0216 02:36:39.763034 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod326ce8bf_5b0c_4354_a324_ba2842ea7cd9.slice/crio-5454ba40e8b1f15dca42c8453dfd0d36776201e338a199c12ef3b93523d21c42 WatchSource:0}: Error finding container 5454ba40e8b1f15dca42c8453dfd0d36776201e338a199c12ef3b93523d21c42: Status 404 returned error can't find the container with id 5454ba40e8b1f15dca42c8453dfd0d36776201e338a199c12ef3b93523d21c42 Feb 16 02:36:39.766805 master-0 kubenswrapper[31559]: I0216 02:36:39.764519 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj"] Feb 16 02:36:40.135038 master-0 kubenswrapper[31559]: I0216 02:36:40.134846 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" event={"ID":"8deae9af-57d0-43f7-a94f-b9e4153c5f4d","Type":"ContainerStarted","Data":"171ecb2c0f3ddfa8048698572fb361e01d0d740ddcc5c4c55c1a974a97d387f4"} Feb 16 02:36:40.136782 master-0 kubenswrapper[31559]: I0216 02:36:40.136724 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" event={"ID":"326ce8bf-5b0c-4354-a324-ba2842ea7cd9","Type":"ContainerStarted","Data":"9058f9dcfb19ac0c865febd58619ab526152ea2e9b6147b4d0a42c531233fba5"} Feb 16 02:36:40.136975 master-0 kubenswrapper[31559]: I0216 02:36:40.136788 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" event={"ID":"326ce8bf-5b0c-4354-a324-ba2842ea7cd9","Type":"ContainerStarted","Data":"5454ba40e8b1f15dca42c8453dfd0d36776201e338a199c12ef3b93523d21c42"} Feb 16 02:36:40.136975 master-0 kubenswrapper[31559]: I0216 02:36:40.136907 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:36:40.186502 master-0 kubenswrapper[31559]: I0216 02:36:40.185555 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" podStartSLOduration=34.185534496 podStartE2EDuration="34.185534496s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:36:40.179266582 +0000 UTC m=+852.523872687" watchObservedRunningTime="2026-02-16 02:36:40.185534496 +0000 UTC m=+852.530140511" Feb 16 02:36:42.160611 master-0 kubenswrapper[31559]: I0216 02:36:42.160542 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" event={"ID":"8deae9af-57d0-43f7-a94f-b9e4153c5f4d","Type":"ContainerStarted","Data":"9e80c0b93ef928e7a5f37e77bb77783f2432e6f03d60b793b29076dcba808bef"} Feb 16 02:36:42.161236 master-0 kubenswrapper[31559]: I0216 02:36:42.160754 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:42.163782 master-0 kubenswrapper[31559]: I0216 02:36:42.162954 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" 
event={"ID":"5a00ec8c-7fd9-4450-8879-af88897ebfc6","Type":"ContainerStarted","Data":"2fdc35a6e7f90af7b6af540dd3598764934d96c9e05374a7c3143ea6fb93d928"} Feb 16 02:36:42.163782 master-0 kubenswrapper[31559]: I0216 02:36:42.163588 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" Feb 16 02:36:42.228167 master-0 kubenswrapper[31559]: I0216 02:36:42.228061 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" podStartSLOduration=33.687873716 podStartE2EDuration="36.228036706s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:39.273160155 +0000 UTC m=+851.617766170" lastFinishedPulling="2026-02-16 02:36:41.813323135 +0000 UTC m=+854.157929160" observedRunningTime="2026-02-16 02:36:42.204402398 +0000 UTC m=+854.549008423" watchObservedRunningTime="2026-02-16 02:36:42.228036706 +0000 UTC m=+854.572642731" Feb 16 02:36:42.243636 master-0 kubenswrapper[31559]: I0216 02:36:42.243427 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" podStartSLOduration=33.546871057 podStartE2EDuration="36.243403578s" podCreationTimestamp="2026-02-16 02:36:06 +0000 UTC" firstStartedPulling="2026-02-16 02:36:39.095696752 +0000 UTC m=+851.440302807" lastFinishedPulling="2026-02-16 02:36:41.792229273 +0000 UTC m=+854.136835328" observedRunningTime="2026-02-16 02:36:42.233876619 +0000 UTC m=+854.578482634" watchObservedRunningTime="2026-02-16 02:36:42.243403578 +0000 UTC m=+854.588009603" Feb 16 02:36:48.552548 master-0 kubenswrapper[31559]: I0216 02:36:48.552359 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ftls6" Feb 16 02:36:48.736742 master-0 
kubenswrapper[31559]: I0216 02:36:48.736334 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bwxg9f" Feb 16 02:36:49.212721 master-0 kubenswrapper[31559]: I0216 02:36:49.212627 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-6nxrj" Feb 16 02:37:32.289895 master-0 kubenswrapper[31559]: I0216 02:37:32.288766 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-gwljd"] Feb 16 02:37:32.291847 master-0 kubenswrapper[31559]: I0216 02:37:32.291725 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" Feb 16 02:37:32.296060 master-0 kubenswrapper[31559]: I0216 02:37:32.295762 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 02:37:32.296060 master-0 kubenswrapper[31559]: I0216 02:37:32.295985 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 02:37:32.296223 master-0 kubenswrapper[31559]: I0216 02:37:32.296105 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 02:37:32.323824 master-0 kubenswrapper[31559]: I0216 02:37:32.323774 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-gwljd"] Feb 16 02:37:32.330536 master-0 kubenswrapper[31559]: I0216 02:37:32.330486 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d78499c-8nnpn"] Feb 16 02:37:32.332479 master-0 kubenswrapper[31559]: I0216 02:37:32.332153 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.334225 master-0 kubenswrapper[31559]: I0216 02:37:32.334173 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 02:37:32.346325 master-0 kubenswrapper[31559]: I0216 02:37:32.345334 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-8nnpn"] Feb 16 02:37:32.397673 master-0 kubenswrapper[31559]: I0216 02:37:32.397584 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-config\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.397673 master-0 kubenswrapper[31559]: I0216 02:37:32.397645 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-dns-svc\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.397938 master-0 kubenswrapper[31559]: I0216 02:37:32.397728 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk89l\" (UniqueName: \"kubernetes.io/projected/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-kube-api-access-tk89l\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.397938 master-0 kubenswrapper[31559]: I0216 02:37:32.397770 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c20b360-7af1-42e9-af09-713bcb62953f-config\") pod \"dnsmasq-dns-5c7b6fb887-gwljd\" (UID: 
\"9c20b360-7af1-42e9-af09-713bcb62953f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" Feb 16 02:37:32.397938 master-0 kubenswrapper[31559]: I0216 02:37:32.397857 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6flv\" (UniqueName: \"kubernetes.io/projected/9c20b360-7af1-42e9-af09-713bcb62953f-kube-api-access-r6flv\") pod \"dnsmasq-dns-5c7b6fb887-gwljd\" (UID: \"9c20b360-7af1-42e9-af09-713bcb62953f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" Feb 16 02:37:32.502112 master-0 kubenswrapper[31559]: I0216 02:37:32.499923 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-config\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.502112 master-0 kubenswrapper[31559]: I0216 02:37:32.500675 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-dns-svc\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.502112 master-0 kubenswrapper[31559]: I0216 02:37:32.500925 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk89l\" (UniqueName: \"kubernetes.io/projected/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-kube-api-access-tk89l\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.502112 master-0 kubenswrapper[31559]: I0216 02:37:32.500982 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-config\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: 
\"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.502112 master-0 kubenswrapper[31559]: I0216 02:37:32.500993 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c20b360-7af1-42e9-af09-713bcb62953f-config\") pod \"dnsmasq-dns-5c7b6fb887-gwljd\" (UID: \"9c20b360-7af1-42e9-af09-713bcb62953f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" Feb 16 02:37:32.502112 master-0 kubenswrapper[31559]: I0216 02:37:32.501120 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6flv\" (UniqueName: \"kubernetes.io/projected/9c20b360-7af1-42e9-af09-713bcb62953f-kube-api-access-r6flv\") pod \"dnsmasq-dns-5c7b6fb887-gwljd\" (UID: \"9c20b360-7af1-42e9-af09-713bcb62953f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" Feb 16 02:37:32.502112 master-0 kubenswrapper[31559]: I0216 02:37:32.501603 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-dns-svc\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.503039 master-0 kubenswrapper[31559]: I0216 02:37:32.502189 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c20b360-7af1-42e9-af09-713bcb62953f-config\") pod \"dnsmasq-dns-5c7b6fb887-gwljd\" (UID: \"9c20b360-7af1-42e9-af09-713bcb62953f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" Feb 16 02:37:32.521197 master-0 kubenswrapper[31559]: I0216 02:37:32.520686 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6flv\" (UniqueName: \"kubernetes.io/projected/9c20b360-7af1-42e9-af09-713bcb62953f-kube-api-access-r6flv\") pod \"dnsmasq-dns-5c7b6fb887-gwljd\" (UID: 
\"9c20b360-7af1-42e9-af09-713bcb62953f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" Feb 16 02:37:32.523047 master-0 kubenswrapper[31559]: I0216 02:37:32.522978 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk89l\" (UniqueName: \"kubernetes.io/projected/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-kube-api-access-tk89l\") pod \"dnsmasq-dns-7d78499c-8nnpn\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") " pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:32.677523 master-0 kubenswrapper[31559]: I0216 02:37:32.677456 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" Feb 16 02:37:32.683854 master-0 kubenswrapper[31559]: I0216 02:37:32.683809 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-8nnpn" Feb 16 02:37:33.203466 master-0 kubenswrapper[31559]: I0216 02:37:33.201815 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-gwljd"] Feb 16 02:37:33.236551 master-0 kubenswrapper[31559]: I0216 02:37:33.233612 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-8nnpn"] Feb 16 02:37:33.285152 master-0 kubenswrapper[31559]: I0216 02:37:33.285102 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-xwh4z"] Feb 16 02:37:33.287445 master-0 kubenswrapper[31559]: I0216 02:37:33.286822 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.294817 master-0 kubenswrapper[31559]: I0216 02:37:33.292873 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-xwh4z"] Feb 16 02:37:33.337028 master-0 kubenswrapper[31559]: I0216 02:37:33.335320 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.337028 master-0 kubenswrapper[31559]: I0216 02:37:33.335611 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-config\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.337028 master-0 kubenswrapper[31559]: I0216 02:37:33.335730 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blbww\" (UniqueName: \"kubernetes.io/projected/95fe90e7-e416-4f71-a5ec-4d1529dea48a-kube-api-access-blbww\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.344647 master-0 kubenswrapper[31559]: I0216 02:37:33.344588 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-gwljd"] Feb 16 02:37:33.439551 master-0 kubenswrapper[31559]: I0216 02:37:33.439047 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blbww\" (UniqueName: \"kubernetes.io/projected/95fe90e7-e416-4f71-a5ec-4d1529dea48a-kube-api-access-blbww\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" 
(UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.439551 master-0 kubenswrapper[31559]: I0216 02:37:33.439143 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.439551 master-0 kubenswrapper[31559]: I0216 02:37:33.439234 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-config\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.440514 master-0 kubenswrapper[31559]: I0216 02:37:33.440338 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-config\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.443299 master-0 kubenswrapper[31559]: I0216 02:37:33.443267 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.472338 master-0 kubenswrapper[31559]: I0216 02:37:33.468987 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blbww\" (UniqueName: \"kubernetes.io/projected/95fe90e7-e416-4f71-a5ec-4d1529dea48a-kube-api-access-blbww\") pod \"dnsmasq-dns-5bcd98d69f-xwh4z\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") " 
pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.585392 master-0 kubenswrapper[31559]: I0216 02:37:33.585324 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-8nnpn"] Feb 16 02:37:33.608500 master-0 kubenswrapper[31559]: I0216 02:37:33.608305 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hn84p"] Feb 16 02:37:33.610168 master-0 kubenswrapper[31559]: I0216 02:37:33.609770 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.624677 master-0 kubenswrapper[31559]: I0216 02:37:33.622876 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hn84p"] Feb 16 02:37:33.624677 master-0 kubenswrapper[31559]: I0216 02:37:33.623657 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:37:33.762834 master-0 kubenswrapper[31559]: I0216 02:37:33.762785 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-config\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.762961 master-0 kubenswrapper[31559]: I0216 02:37:33.762863 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.762961 master-0 kubenswrapper[31559]: I0216 02:37:33.762892 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2pk\" (UniqueName: 
\"kubernetes.io/projected/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-kube-api-access-9p2pk\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.858497 master-0 kubenswrapper[31559]: I0216 02:37:33.858369 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-8nnpn" event={"ID":"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d","Type":"ContainerStarted","Data":"b77fa396d98203af59ef7081951217c1042c3f5893885fdd6cb1ca58e4ae1446"} Feb 16 02:37:33.860701 master-0 kubenswrapper[31559]: I0216 02:37:33.860631 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" event={"ID":"9c20b360-7af1-42e9-af09-713bcb62953f","Type":"ContainerStarted","Data":"c0462479ee798b7b85332bbaf6948c46593a6f360a3d31e5e18d7e41e66542a5"} Feb 16 02:37:33.865165 master-0 kubenswrapper[31559]: I0216 02:37:33.865096 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-config\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.865355 master-0 kubenswrapper[31559]: I0216 02:37:33.865324 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.865398 master-0 kubenswrapper[31559]: I0216 02:37:33.865382 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p2pk\" (UniqueName: \"kubernetes.io/projected/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-kube-api-access-9p2pk\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: 
\"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.867283 master-0 kubenswrapper[31559]: I0216 02:37:33.867206 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-config\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.870836 master-0 kubenswrapper[31559]: I0216 02:37:33.868535 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:33.901339 master-0 kubenswrapper[31559]: I0216 02:37:33.901275 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p2pk\" (UniqueName: \"kubernetes.io/projected/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-kube-api-access-9p2pk\") pod \"dnsmasq-dns-6b98d7b55c-hn84p\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:34.013613 master-0 kubenswrapper[31559]: I0216 02:37:34.013168 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:37:34.103577 master-0 kubenswrapper[31559]: W0216 02:37:34.103493 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95fe90e7_e416_4f71_a5ec_4d1529dea48a.slice/crio-c183f0bdaf5ad7e912d2e52e1a323751f5cb1b0fe444e8f745eb39b9c989fb1d WatchSource:0}: Error finding container c183f0bdaf5ad7e912d2e52e1a323751f5cb1b0fe444e8f745eb39b9c989fb1d: Status 404 returned error can't find the container with id c183f0bdaf5ad7e912d2e52e1a323751f5cb1b0fe444e8f745eb39b9c989fb1d Feb 16 02:37:34.131152 master-0 kubenswrapper[31559]: I0216 02:37:34.131095 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-xwh4z"] Feb 16 02:37:34.525283 master-0 kubenswrapper[31559]: W0216 02:37:34.525169 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1cac3b2_dea2_4c55_a8db_a71dbc5f95f4.slice/crio-4c501fa60bbe27291420b857df3494a8e65c9fac03ce98651a04969b3637f144 WatchSource:0}: Error finding container 4c501fa60bbe27291420b857df3494a8e65c9fac03ce98651a04969b3637f144: Status 404 returned error can't find the container with id 4c501fa60bbe27291420b857df3494a8e65c9fac03ce98651a04969b3637f144 Feb 16 02:37:34.545054 master-0 kubenswrapper[31559]: I0216 02:37:34.544978 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hn84p"] Feb 16 02:37:34.879645 master-0 kubenswrapper[31559]: I0216 02:37:34.879508 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" event={"ID":"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4","Type":"ContainerStarted","Data":"4c501fa60bbe27291420b857df3494a8e65c9fac03ce98651a04969b3637f144"} Feb 16 02:37:34.882040 master-0 kubenswrapper[31559]: I0216 02:37:34.881988 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" event={"ID":"95fe90e7-e416-4f71-a5ec-4d1529dea48a","Type":"ContainerStarted","Data":"c183f0bdaf5ad7e912d2e52e1a323751f5cb1b0fe444e8f745eb39b9c989fb1d"} Feb 16 02:37:37.420407 master-0 kubenswrapper[31559]: I0216 02:37:37.420327 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 02:37:37.422127 master-0 kubenswrapper[31559]: I0216 02:37:37.422094 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.427821 master-0 kubenswrapper[31559]: I0216 02:37:37.424725 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 02:37:37.427821 master-0 kubenswrapper[31559]: I0216 02:37:37.424956 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 02:37:37.427821 master-0 kubenswrapper[31559]: I0216 02:37:37.425159 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 02:37:37.427821 master-0 kubenswrapper[31559]: I0216 02:37:37.426786 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 02:37:37.427821 master-0 kubenswrapper[31559]: I0216 02:37:37.427294 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 02:37:37.432559 master-0 kubenswrapper[31559]: I0216 02:37:37.432228 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 02:37:37.441899 master-0 kubenswrapper[31559]: I0216 02:37:37.441854 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 02:37:37.545450 master-0 kubenswrapper[31559]: I0216 02:37:37.543819 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 02:37:37.552150 master-0 
kubenswrapper[31559]: I0216 02:37:37.547690 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 02:37:37.553812 master-0 kubenswrapper[31559]: I0216 02:37:37.552847 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 02:37:37.553812 master-0 kubenswrapper[31559]: I0216 02:37:37.553083 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 02:37:37.568663 master-0 kubenswrapper[31559]: I0216 02:37:37.568627 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.580855 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.581608 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.581781 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.582119 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.582191 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fe3dd35-1b4a-4fde-bb0f-cb510af91209\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed48fb31-f4d4-496a-960a-52d78bfbd102\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.582250 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.582319 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.582349 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmltg\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-kube-api-access-hmltg\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.582484 31559 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-config-data\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.582530 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.583498 master-0 kubenswrapper[31559]: I0216 02:37:37.582604 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.607883 master-0 kubenswrapper[31559]: I0216 02:37:37.607794 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.685585 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.685634 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.685667 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.686353 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx665\" (UniqueName: \"kubernetes.io/projected/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-kube-api-access-gx665\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.686400 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.686459 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-kolla-config\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.686484 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " 
pod="openstack/memcached-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.686505 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8fe3dd35-1b4a-4fde-bb0f-cb510af91209\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed48fb31-f4d4-496a-960a-52d78bfbd102\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.686542 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.686734 master-0 kubenswrapper[31559]: I0216 02:37:37.686577 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.687357 master-0 kubenswrapper[31559]: I0216 02:37:37.687293 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.688524 master-0 kubenswrapper[31559]: I0216 02:37:37.687714 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmltg\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-kube-api-access-hmltg\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.688524 master-0 
kubenswrapper[31559]: I0216 02:37:37.687783 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-config-data\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.688524 master-0 kubenswrapper[31559]: I0216 02:37:37.687823 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.688524 master-0 kubenswrapper[31559]: I0216 02:37:37.687849 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-config-data\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.688524 master-0 kubenswrapper[31559]: I0216 02:37:37.687868 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.688524 master-0 kubenswrapper[31559]: I0216 02:37:37.687887 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.688524 master-0 kubenswrapper[31559]: I0216 02:37:37.688325 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.689790 master-0 kubenswrapper[31559]: I0216 02:37:37.689138 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.689790 master-0 kubenswrapper[31559]: I0216 02:37:37.689005 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.692611 master-0 kubenswrapper[31559]: I0216 02:37:37.692587 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-config-data\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.695913 master-0 kubenswrapper[31559]: I0216 02:37:37.695875 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.696066 master-0 kubenswrapper[31559]: I0216 02:37:37.696039 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
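The entries above interleave the kubelet volume-mount lifecycle for several pods: `operationExecutor.MountVolume started` (reconciler_common.go) followed by `MountVolume.SetUp succeeded` (operation_generator.go) for each volume. A minimal Python sketch for pulling those events out of a saved copy of this journal — the regex is an assumption inferred from the line shapes visible here (klog-escaped `\"…\"` inside the quoted message, a trailing unescaped `pod="ns/name"` field), not an official log format:

```python
import re
from collections import Counter

# Matches kubelet volume-operation lines as captured in this journal.
# Inside the quoted klog message, quotes are backslash-escaped (\"config\");
# the trailing structured field pod="ns/name" is unescaped.
MOUNT_RE = re.compile(
    r'"(?P<op>operationExecutor\.MountVolume started|MountVolume\.SetUp succeeded)'
    r' for volume \\"(?P<volume>[^\\"]+)\\"'
    r'.*?pod="(?P<pod>[^"]+)"'
)

def mount_events(lines):
    """Yield (operation, volume, pod) for each volume-mount log line."""
    for line in lines:
        m = MOUNT_RE.search(line)
        if m:
            yield m.group("op"), m.group("volume"), m.group("pod")

def succeeded_counts(lines):
    """Count MountVolume.SetUp successes per pod (ns/name)."""
    counts = Counter()
    for op, _volume, pod in mount_events(lines):
        if op.endswith("succeeded"):
            counts[pod] += 1
    return counts
```

Volumes whose `started` entry never gains a matching `succeeded` entry are the ones to investigate when a pod hangs in ContainerCreating.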
Feb 16 02:37:37.696133 master-0 kubenswrapper[31559]: I0216 02:37:37.696087 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fe3dd35-1b4a-4fde-bb0f-cb510af91209\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed48fb31-f4d4-496a-960a-52d78bfbd102\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/975258f70aa2c197efad025983e9ab1d301f6fdbf2a1055150927f8821f8ba1e/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.697369 master-0 kubenswrapper[31559]: I0216 02:37:37.697322 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.697369 master-0 kubenswrapper[31559]: I0216 02:37:37.697360 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.697676 master-0 kubenswrapper[31559]: I0216 02:37:37.697653 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.715194 master-0 kubenswrapper[31559]: I0216 02:37:37.715146 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmltg\" (UniqueName: \"kubernetes.io/projected/93fa9c51-4a98-4a66-8b10-a1213ca9f95e-kube-api-access-hmltg\") pod \"rabbitmq-server-0\" (UID: 
\"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:37.764356 master-0 kubenswrapper[31559]: I0216 02:37:37.764312 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 02:37:37.767120 master-0 kubenswrapper[31559]: I0216 02:37:37.767068 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.773514 master-0 kubenswrapper[31559]: I0216 02:37:37.773476 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 02:37:37.773869 master-0 kubenswrapper[31559]: I0216 02:37:37.773819 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 02:37:37.773946 master-0 kubenswrapper[31559]: I0216 02:37:37.773885 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 02:37:37.776167 master-0 kubenswrapper[31559]: I0216 02:37:37.776135 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 02:37:37.776232 master-0 kubenswrapper[31559]: I0216 02:37:37.776191 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 02:37:37.777747 master-0 kubenswrapper[31559]: I0216 02:37:37.777624 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 02:37:37.790086 master-0 kubenswrapper[31559]: I0216 02:37:37.789330 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx665\" (UniqueName: \"kubernetes.io/projected/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-kube-api-access-gx665\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.790086 master-0 kubenswrapper[31559]: 
I0216 02:37:37.789399 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-kolla-config\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.790086 master-0 kubenswrapper[31559]: I0216 02:37:37.789430 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.790086 master-0 kubenswrapper[31559]: I0216 02:37:37.789516 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-config-data\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.790086 master-0 kubenswrapper[31559]: I0216 02:37:37.789535 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.794069 master-0 kubenswrapper[31559]: I0216 02:37:37.793803 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-kolla-config\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.799291 master-0 kubenswrapper[31559]: I0216 02:37:37.799225 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-config-data\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.802029 master-0 kubenswrapper[31559]: I0216 02:37:37.801990 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.846909 master-0 kubenswrapper[31559]: I0216 02:37:37.846360 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx665\" (UniqueName: \"kubernetes.io/projected/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-kube-api-access-gx665\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.851069 master-0 kubenswrapper[31559]: I0216 02:37:37.851015 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 02:37:37.857150 master-0 kubenswrapper[31559]: I0216 02:37:37.857115 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6\") " pod="openstack/memcached-0" Feb 16 02:37:37.901794 master-0 kubenswrapper[31559]: I0216 02:37:37.901696 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cd89103-815f-45df-9b47-4e3db3c708f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.901794 master-0 kubenswrapper[31559]: I0216 02:37:37.901781 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.902197 master-0 kubenswrapper[31559]: I0216 02:37:37.902159 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.902197 master-0 kubenswrapper[31559]: I0216 02:37:37.902191 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8t4n\" (UniqueName: \"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-kube-api-access-w8t4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.902303 master-0 kubenswrapper[31559]: I0216 02:37:37.902216 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.903126 master-0 kubenswrapper[31559]: I0216 02:37:37.902257 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4aa82de5-56c2-4753-88a3-a3d162b48f01\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e45d3a10-c431-46f0-9645-451dc39369da\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.903736 
master-0 kubenswrapper[31559]: I0216 02:37:37.903282 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.903943 master-0 kubenswrapper[31559]: I0216 02:37:37.903758 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.903943 master-0 kubenswrapper[31559]: I0216 02:37:37.903808 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.904783 master-0 kubenswrapper[31559]: I0216 02:37:37.904756 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.905047 master-0 kubenswrapper[31559]: I0216 02:37:37.904997 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cd89103-815f-45df-9b47-4e3db3c708f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:37.916861 master-0 kubenswrapper[31559]: I0216 02:37:37.916819 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 02:37:38.018584 master-0 kubenswrapper[31559]: I0216 02:37:38.018465 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4aa82de5-56c2-4753-88a3-a3d162b48f01\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e45d3a10-c431-46f0-9645-451dc39369da\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018584 master-0 kubenswrapper[31559]: I0216 02:37:38.018540 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018584 master-0 kubenswrapper[31559]: I0216 02:37:38.018563 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018820 master-0 kubenswrapper[31559]: I0216 02:37:38.018608 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018820 master-0 kubenswrapper[31559]: I0216 02:37:38.018658 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018820 master-0 kubenswrapper[31559]: I0216 02:37:38.018686 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cd89103-815f-45df-9b47-4e3db3c708f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018820 master-0 kubenswrapper[31559]: I0216 02:37:38.018778 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cd89103-815f-45df-9b47-4e3db3c708f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018820 master-0 kubenswrapper[31559]: I0216 02:37:38.018803 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018968 master-0 kubenswrapper[31559]: I0216 02:37:38.018821 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018968 master-0 kubenswrapper[31559]: I0216 02:37:38.018837 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-w8t4n\" (UniqueName: \"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-kube-api-access-w8t4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.018968 master-0 kubenswrapper[31559]: I0216 02:37:38.018863 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.024161 master-0 kubenswrapper[31559]: I0216 02:37:38.021749 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.024161 master-0 kubenswrapper[31559]: I0216 02:37:38.022324 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.024161 master-0 kubenswrapper[31559]: I0216 02:37:38.023225 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.024794 master-0 kubenswrapper[31559]: I0216 02:37:38.024756 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.025037 master-0 kubenswrapper[31559]: I0216 02:37:38.025005 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.027255 master-0 kubenswrapper[31559]: I0216 02:37:38.027146 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 02:37:38.027255 master-0 kubenswrapper[31559]: I0216 02:37:38.027202 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4aa82de5-56c2-4753-88a3-a3d162b48f01\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e45d3a10-c431-46f0-9645-451dc39369da\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/38cdcefa2fde070df11b25288277d13703840f079443d60ccd0ebf581b6c0602/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.027969 master-0 kubenswrapper[31559]: I0216 02:37:38.027947 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cd89103-815f-45df-9b47-4e3db3c708f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.031267 master-0 kubenswrapper[31559]: I0216 02:37:38.031218 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/5cd89103-815f-45df-9b47-4e3db3c708f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.038023 master-0 kubenswrapper[31559]: I0216 02:37:38.037943 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.063704 master-0 kubenswrapper[31559]: I0216 02:37:38.063637 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8t4n\" (UniqueName: \"kubernetes.io/projected/5cd89103-815f-45df-9b47-4e3db3c708f2-kube-api-access-w8t4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.066885 master-0 kubenswrapper[31559]: I0216 02:37:38.066589 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cd89103-815f-45df-9b47-4e3db3c708f2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:38.855150 master-0 kubenswrapper[31559]: I0216 02:37:38.854965 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 02:37:38.857029 master-0 kubenswrapper[31559]: I0216 02:37:38.856977 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 02:37:38.864295 master-0 kubenswrapper[31559]: I0216 02:37:38.863040 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 02:37:38.864295 master-0 kubenswrapper[31559]: I0216 02:37:38.863986 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 02:37:38.864295 master-0 kubenswrapper[31559]: I0216 02:37:38.864065 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 02:37:38.872827 master-0 kubenswrapper[31559]: I0216 02:37:38.872771 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 02:37:38.947695 master-0 kubenswrapper[31559]: I0216 02:37:38.947287 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-606b9a42-2450-4d02-8e28-1693cdb82183\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e807ac13-f408-4e64-8a81-8f423655de78\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:38.947695 master-0 kubenswrapper[31559]: I0216 02:37:38.947345 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:38.947695 master-0 kubenswrapper[31559]: I0216 02:37:38.947379 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-kolla-config\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " 
pod="openstack/openstack-galera-0" Feb 16 02:37:38.947695 master-0 kubenswrapper[31559]: I0216 02:37:38.947405 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:38.947695 master-0 kubenswrapper[31559]: I0216 02:37:38.947460 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:38.947695 master-0 kubenswrapper[31559]: I0216 02:37:38.947480 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:38.947695 master-0 kubenswrapper[31559]: I0216 02:37:38.947557 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-config-data-default\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:38.947695 master-0 kubenswrapper[31559]: I0216 02:37:38.947578 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6phr\" (UniqueName: \"kubernetes.io/projected/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-kube-api-access-m6phr\") pod \"openstack-galera-0\" (UID: 
\"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.053497 master-0 kubenswrapper[31559]: I0216 02:37:39.053400 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-kolla-config\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.053727 master-0 kubenswrapper[31559]: I0216 02:37:39.053507 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.053727 master-0 kubenswrapper[31559]: I0216 02:37:39.053553 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.053727 master-0 kubenswrapper[31559]: I0216 02:37:39.053571 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.053727 master-0 kubenswrapper[31559]: I0216 02:37:39.053648 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-config-data-default\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " 
pod="openstack/openstack-galera-0" Feb 16 02:37:39.053727 master-0 kubenswrapper[31559]: I0216 02:37:39.053668 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6phr\" (UniqueName: \"kubernetes.io/projected/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-kube-api-access-m6phr\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.053727 master-0 kubenswrapper[31559]: I0216 02:37:39.053718 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-606b9a42-2450-4d02-8e28-1693cdb82183\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e807ac13-f408-4e64-8a81-8f423655de78\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.053933 master-0 kubenswrapper[31559]: I0216 02:37:39.053754 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.054201 master-0 kubenswrapper[31559]: I0216 02:37:39.054178 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.054722 master-0 kubenswrapper[31559]: I0216 02:37:39.054698 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-kolla-config\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 
02:37:39.056940 master-0 kubenswrapper[31559]: I0216 02:37:39.056893 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.058397 master-0 kubenswrapper[31559]: I0216 02:37:39.058191 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-config-data-default\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.060273 master-0 kubenswrapper[31559]: I0216 02:37:39.060230 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 02:37:39.060333 master-0 kubenswrapper[31559]: I0216 02:37:39.060271 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-606b9a42-2450-4d02-8e28-1693cdb82183\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e807ac13-f408-4e64-8a81-8f423655de78\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1a55a7ae7b12bd7721bdf3c9cb9394038a66638618a109f383d7e0273006e828/globalmount\"" pod="openstack/openstack-galera-0" Feb 16 02:37:39.061674 master-0 kubenswrapper[31559]: I0216 02:37:39.061627 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.081976 master-0 kubenswrapper[31559]: I0216 02:37:39.081918 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.100360 master-0 kubenswrapper[31559]: I0216 02:37:39.100294 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6phr\" (UniqueName: \"kubernetes.io/projected/18822ed6-bb35-43f9-b51c-feb4aa42fcc6-kube-api-access-m6phr\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:39.250112 master-0 kubenswrapper[31559]: I0216 02:37:39.250057 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fe3dd35-1b4a-4fde-bb0f-cb510af91209\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed48fb31-f4d4-496a-960a-52d78bfbd102\") pod \"rabbitmq-server-0\" (UID: \"93fa9c51-4a98-4a66-8b10-a1213ca9f95e\") " pod="openstack/rabbitmq-server-0" Feb 16 02:37:39.257333 master-0 kubenswrapper[31559]: I0216 02:37:39.257282 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 02:37:40.302675 master-0 kubenswrapper[31559]: I0216 02:37:40.302411 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 02:37:40.304718 master-0 kubenswrapper[31559]: I0216 02:37:40.304679 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.308718 master-0 kubenswrapper[31559]: I0216 02:37:40.308668 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 02:37:40.310259 master-0 kubenswrapper[31559]: I0216 02:37:40.310225 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 02:37:40.310683 master-0 kubenswrapper[31559]: I0216 02:37:40.310661 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 02:37:40.343755 master-0 kubenswrapper[31559]: I0216 02:37:40.343704 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 02:37:40.387079 master-0 kubenswrapper[31559]: I0216 02:37:40.387013 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.387079 master-0 kubenswrapper[31559]: I0216 02:37:40.387085 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.387425 master-0 kubenswrapper[31559]: I0216 02:37:40.387223 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.387425 master-0 kubenswrapper[31559]: I0216 02:37:40.387279 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.387425 master-0 kubenswrapper[31559]: I0216 02:37:40.387312 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7624be31-906c-4228-ac19-6ff507c8c741\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a7d1c9eb-b861-4b29-8acc-7440363cab28\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.387425 master-0 kubenswrapper[31559]: I0216 02:37:40.387403 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.388085 master-0 kubenswrapper[31559]: I0216 02:37:40.387426 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.388085 master-0 kubenswrapper[31559]: I0216 02:37:40.387496 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cscqm\" (UniqueName: 
\"kubernetes.io/projected/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-kube-api-access-cscqm\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.494462 master-0 kubenswrapper[31559]: I0216 02:37:40.494309 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.494462 master-0 kubenswrapper[31559]: I0216 02:37:40.494461 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7624be31-906c-4228-ac19-6ff507c8c741\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a7d1c9eb-b861-4b29-8acc-7440363cab28\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.494719 master-0 kubenswrapper[31559]: I0216 02:37:40.494516 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.494719 master-0 kubenswrapper[31559]: I0216 02:37:40.494561 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.494719 master-0 kubenswrapper[31559]: I0216 02:37:40.494607 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cscqm\" (UniqueName: \"kubernetes.io/projected/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-kube-api-access-cscqm\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.494823 master-0 kubenswrapper[31559]: I0216 02:37:40.494765 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.494861 master-0 kubenswrapper[31559]: I0216 02:37:40.494829 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.495713 master-0 kubenswrapper[31559]: I0216 02:37:40.494912 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.502315 master-0 kubenswrapper[31559]: I0216 02:37:40.497014 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.502315 master-0 kubenswrapper[31559]: I0216 02:37:40.497122 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.502315 master-0 kubenswrapper[31559]: I0216 02:37:40.497123 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.502315 master-0 kubenswrapper[31559]: I0216 02:37:40.497997 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.503403 master-0 kubenswrapper[31559]: I0216 02:37:40.503351 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 02:37:40.503536 master-0 kubenswrapper[31559]: I0216 02:37:40.503418 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7624be31-906c-4228-ac19-6ff507c8c741\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a7d1c9eb-b861-4b29-8acc-7440363cab28\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c3479998785a4cbca7dd3198785b195bc5b74e2aa7045e7ba310b885475ee1c6/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.508259 master-0 kubenswrapper[31559]: I0216 02:37:40.508193 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.510917 master-0 kubenswrapper[31559]: I0216 02:37:40.510885 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.525451 master-0 kubenswrapper[31559]: I0216 02:37:40.525383 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscqm\" (UniqueName: \"kubernetes.io/projected/95409e86-4b58-4ee3-bd1a-e67fbf366ebb-kube-api-access-cscqm\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:40.603143 master-0 kubenswrapper[31559]: I0216 02:37:40.599516 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4aa82de5-56c2-4753-88a3-a3d162b48f01\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^e45d3a10-c431-46f0-9645-451dc39369da\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cd89103-815f-45df-9b47-4e3db3c708f2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:40.895147 master-0 kubenswrapper[31559]: I0216 02:37:40.894973 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:37:41.930395 master-0 kubenswrapper[31559]: I0216 02:37:41.930310 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-606b9a42-2450-4d02-8e28-1693cdb82183\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e807ac13-f408-4e64-8a81-8f423655de78\") pod \"openstack-galera-0\" (UID: \"18822ed6-bb35-43f9-b51c-feb4aa42fcc6\") " pod="openstack/openstack-galera-0" Feb 16 02:37:42.180912 master-0 kubenswrapper[31559]: I0216 02:37:42.180803 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 02:37:42.973896 master-0 kubenswrapper[31559]: I0216 02:37:42.973761 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jhsdr"] Feb 16 02:37:42.975179 master-0 kubenswrapper[31559]: I0216 02:37:42.975100 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:42.979913 master-0 kubenswrapper[31559]: I0216 02:37:42.978669 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 02:37:42.979913 master-0 kubenswrapper[31559]: I0216 02:37:42.979009 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 02:37:42.995688 master-0 kubenswrapper[31559]: I0216 02:37:42.995621 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jhsdr"] Feb 16 02:37:43.053019 master-0 kubenswrapper[31559]: I0216 02:37:43.052941 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-w2cr4"] Feb 16 02:37:43.062162 master-0 kubenswrapper[31559]: I0216 02:37:43.055682 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.063021 master-0 kubenswrapper[31559]: I0216 02:37:43.061706 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w2cr4"] Feb 16 02:37:43.074759 master-0 kubenswrapper[31559]: I0216 02:37:43.073926 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-run\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.074759 master-0 kubenswrapper[31559]: I0216 02:37:43.073985 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-run-ovn\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.074759 master-0 kubenswrapper[31559]: I0216 
02:37:43.074190 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caef9948-8516-40b1-a940-6b2bea06cf6d-combined-ca-bundle\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.074759 master-0 kubenswrapper[31559]: I0216 02:37:43.074484 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caef9948-8516-40b1-a940-6b2bea06cf6d-scripts\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.074759 master-0 kubenswrapper[31559]: I0216 02:37:43.074543 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdnj\" (UniqueName: \"kubernetes.io/projected/caef9948-8516-40b1-a940-6b2bea06cf6d-kube-api-access-pcdnj\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.074759 master-0 kubenswrapper[31559]: I0216 02:37:43.074578 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/caef9948-8516-40b1-a940-6b2bea06cf6d-ovn-controller-tls-certs\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.074759 master-0 kubenswrapper[31559]: I0216 02:37:43.074758 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-log-ovn\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 
02:37:43.079522 master-0 kubenswrapper[31559]: I0216 02:37:43.078706 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7624be31-906c-4228-ac19-6ff507c8c741\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a7d1c9eb-b861-4b29-8acc-7440363cab28\") pod \"openstack-cell1-galera-0\" (UID: \"95409e86-4b58-4ee3-bd1a-e67fbf366ebb\") " pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:43.176875 master-0 kubenswrapper[31559]: I0216 02:37:43.176744 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-run-ovn\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.176875 master-0 kubenswrapper[31559]: I0216 02:37:43.176822 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcr78\" (UniqueName: \"kubernetes.io/projected/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-kube-api-access-zcr78\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.176905 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caef9948-8516-40b1-a940-6b2bea06cf6d-combined-ca-bundle\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.176954 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-lib\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" 
Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.177012 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-scripts\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.177036 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caef9948-8516-40b1-a940-6b2bea06cf6d-scripts\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.177057 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-log\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.177073 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcdnj\" (UniqueName: \"kubernetes.io/projected/caef9948-8516-40b1-a940-6b2bea06cf6d-kube-api-access-pcdnj\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.177093 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/caef9948-8516-40b1-a940-6b2bea06cf6d-ovn-controller-tls-certs\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 
02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.177584 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-run\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.177610 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-etc-ovs\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.177659 master-0 kubenswrapper[31559]: I0216 02:37:43.177630 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-log-ovn\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.178061 master-0 kubenswrapper[31559]: I0216 02:37:43.177665 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-run\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.178261 master-0 kubenswrapper[31559]: I0216 02:37:43.178241 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-run\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.178512 master-0 kubenswrapper[31559]: I0216 02:37:43.178382 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-run-ovn\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.178595 master-0 kubenswrapper[31559]: I0216 02:37:43.178413 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/caef9948-8516-40b1-a940-6b2bea06cf6d-var-log-ovn\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.182590 master-0 kubenswrapper[31559]: I0216 02:37:43.182548 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/caef9948-8516-40b1-a940-6b2bea06cf6d-ovn-controller-tls-certs\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.183486 master-0 kubenswrapper[31559]: I0216 02:37:43.183467 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caef9948-8516-40b1-a940-6b2bea06cf6d-combined-ca-bundle\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.183952 master-0 kubenswrapper[31559]: I0216 02:37:43.183915 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caef9948-8516-40b1-a940-6b2bea06cf6d-scripts\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.204915 master-0 kubenswrapper[31559]: I0216 02:37:43.204875 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcdnj\" (UniqueName: 
\"kubernetes.io/projected/caef9948-8516-40b1-a940-6b2bea06cf6d-kube-api-access-pcdnj\") pod \"ovn-controller-jhsdr\" (UID: \"caef9948-8516-40b1-a940-6b2bea06cf6d\") " pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.280138 master-0 kubenswrapper[31559]: I0216 02:37:43.279994 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcr78\" (UniqueName: \"kubernetes.io/projected/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-kube-api-access-zcr78\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.280138 master-0 kubenswrapper[31559]: I0216 02:37:43.280122 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-lib\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.280699 master-0 kubenswrapper[31559]: I0216 02:37:43.280179 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-scripts\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.280699 master-0 kubenswrapper[31559]: I0216 02:37:43.280203 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-log\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.280699 master-0 kubenswrapper[31559]: I0216 02:37:43.280427 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-run\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.280699 master-0 kubenswrapper[31559]: I0216 02:37:43.280531 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-run\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.280699 master-0 kubenswrapper[31559]: I0216 02:37:43.280540 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-etc-ovs\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.281059 master-0 kubenswrapper[31559]: I0216 02:37:43.280972 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-log\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.281059 master-0 kubenswrapper[31559]: I0216 02:37:43.281008 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-etc-ovs\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.281059 master-0 kubenswrapper[31559]: I0216 02:37:43.281011 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-var-lib\") pod \"ovn-controller-ovs-w2cr4\" (UID: 
\"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.282594 master-0 kubenswrapper[31559]: I0216 02:37:43.282560 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-scripts\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.295708 master-0 kubenswrapper[31559]: I0216 02:37:43.295691 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcr78\" (UniqueName: \"kubernetes.io/projected/3d14e07d-5696-4d56-a6bc-b076e1c06fb2-kube-api-access-zcr78\") pod \"ovn-controller-ovs-w2cr4\" (UID: \"3d14e07d-5696-4d56-a6bc-b076e1c06fb2\") " pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:43.356749 master-0 kubenswrapper[31559]: I0216 02:37:43.356626 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 02:37:43.387374 master-0 kubenswrapper[31559]: I0216 02:37:43.387317 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jhsdr" Feb 16 02:37:43.398170 master-0 kubenswrapper[31559]: I0216 02:37:43.398107 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:37:44.649044 master-0 kubenswrapper[31559]: I0216 02:37:44.648935 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 02:37:44.651833 master-0 kubenswrapper[31559]: I0216 02:37:44.651783 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:44.655712 master-0 kubenswrapper[31559]: I0216 02:37:44.655655 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 02:37:44.656338 master-0 kubenswrapper[31559]: I0216 02:37:44.656270 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 02:37:44.656674 master-0 kubenswrapper[31559]: I0216 02:37:44.656242 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 02:37:44.657389 master-0 kubenswrapper[31559]: I0216 02:37:44.657279 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 02:37:44.788274 master-0 kubenswrapper[31559]: I0216 02:37:44.788152 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 02:37:45.032387 master-0 kubenswrapper[31559]: I0216 02:37:45.031622 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.033253 master-0 kubenswrapper[31559]: I0216 02:37:45.033193 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.033337 master-0 kubenswrapper[31559]: I0216 02:37:45.033293 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.033631 master-0 kubenswrapper[31559]: I0216 02:37:45.033535 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.034499 master-0 kubenswrapper[31559]: I0216 02:37:45.034189 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-662139ad-5e03-4b21-b5dd-65bb109c9c01\" (UniqueName: \"kubernetes.io/csi/topolvm.io^18a4eb4c-a631-4242-8695-22804df524e8\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.034585 master-0 kubenswrapper[31559]: I0216 02:37:45.034403 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-config\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.035080 master-0 kubenswrapper[31559]: I0216 02:37:45.034794 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2fj\" (UniqueName: \"kubernetes.io/projected/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-kube-api-access-pm2fj\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.036121 master-0 kubenswrapper[31559]: I0216 02:37:45.036031 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.138527 master-0 kubenswrapper[31559]: I0216 02:37:45.138456 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.138740 master-0 kubenswrapper[31559]: I0216 02:37:45.138621 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.138740 master-0 kubenswrapper[31559]: I0216 02:37:45.138671 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.138825 master-0 kubenswrapper[31559]: I0216 02:37:45.138731 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.138908 master-0 kubenswrapper[31559]: I0216 02:37:45.138880 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-config\") pod 
\"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.138955 master-0 kubenswrapper[31559]: I0216 02:37:45.138938 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2fj\" (UniqueName: \"kubernetes.io/projected/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-kube-api-access-pm2fj\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.139053 master-0 kubenswrapper[31559]: I0216 02:37:45.139027 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.142757 master-0 kubenswrapper[31559]: I0216 02:37:45.141108 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.142757 master-0 kubenswrapper[31559]: I0216 02:37:45.142344 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.143249 master-0 kubenswrapper[31559]: I0216 02:37:45.143007 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-config\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.146469 master-0 
kubenswrapper[31559]: I0216 02:37:45.146361 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.157228 master-0 kubenswrapper[31559]: I0216 02:37:45.157172 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.158026 master-0 kubenswrapper[31559]: I0216 02:37:45.157939 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.195522 master-0 kubenswrapper[31559]: I0216 02:37:45.195424 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2fj\" (UniqueName: \"kubernetes.io/projected/6ffc4ca8-6010-47bd-88b0-ce9ff30c2093-kube-api-access-pm2fj\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.241265 master-0 kubenswrapper[31559]: I0216 02:37:45.241074 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-662139ad-5e03-4b21-b5dd-65bb109c9c01\" (UniqueName: \"kubernetes.io/csi/topolvm.io^18a4eb4c-a631-4242-8695-22804df524e8\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:45.244262 master-0 kubenswrapper[31559]: I0216 02:37:45.244190 31559 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 02:37:45.244383 master-0 kubenswrapper[31559]: I0216 02:37:45.244295 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-662139ad-5e03-4b21-b5dd-65bb109c9c01\" (UniqueName: \"kubernetes.io/csi/topolvm.io^18a4eb4c-a631-4242-8695-22804df524e8\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/cc93b50f193517edb17b638b13cfca5aa11a47aba9e5f9baa5630ec7211dafd5/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:49.204722 master-0 kubenswrapper[31559]: I0216 02:37:49.204647 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-662139ad-5e03-4b21-b5dd-65bb109c9c01\" (UniqueName: \"kubernetes.io/csi/topolvm.io^18a4eb4c-a631-4242-8695-22804df524e8\") pod \"ovsdbserver-nb-0\" (UID: \"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093\") " pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:49.484712 master-0 kubenswrapper[31559]: I0216 02:37:49.484651 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 02:37:50.255516 master-0 kubenswrapper[31559]: I0216 02:37:50.247360 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 02:37:50.315653 master-0 kubenswrapper[31559]: I0216 02:37:50.315577 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 02:37:50.319057 master-0 kubenswrapper[31559]: I0216 02:37:50.319007 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 02:37:50.322673 master-0 kubenswrapper[31559]: I0216 02:37:50.322370 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 02:37:50.322673 master-0 kubenswrapper[31559]: I0216 02:37:50.322424 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 02:37:50.322905 master-0 kubenswrapper[31559]: I0216 02:37:50.322844 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 02:37:50.334686 master-0 kubenswrapper[31559]: I0216 02:37:50.334638 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 02:37:50.475019 master-0 kubenswrapper[31559]: I0216 02:37:50.474862 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f29bbe57-d7e8-4aa5-bf01-d33b29608b60\" (UniqueName: \"kubernetes.io/csi/topolvm.io^952f7828-772c-4542-9a3b-69cd9701a2cf\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0" Feb 16 02:37:50.475019 master-0 kubenswrapper[31559]: I0216 02:37:50.474911 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0" Feb 16 02:37:50.475019 master-0 kubenswrapper[31559]: I0216 02:37:50.474965 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " 
pod="openstack/ovsdbserver-sb-0" Feb 16 02:37:50.475554 master-0 kubenswrapper[31559]: I0216 02:37:50.475476 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de0ac61b-c6f4-4144-a647-815a3b8e7bca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0" Feb 16 02:37:50.475796 master-0 kubenswrapper[31559]: I0216 02:37:50.475743 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0" Feb 16 02:37:50.475906 master-0 kubenswrapper[31559]: I0216 02:37:50.475873 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de0ac61b-c6f4-4144-a647-815a3b8e7bca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0" Feb 16 02:37:50.475959 master-0 kubenswrapper[31559]: I0216 02:37:50.475930 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0ac61b-c6f4-4144-a647-815a3b8e7bca-config\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0" Feb 16 02:37:50.476076 master-0 kubenswrapper[31559]: I0216 02:37:50.476024 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97k96\" (UniqueName: \"kubernetes.io/projected/de0ac61b-c6f4-4144-a647-815a3b8e7bca-kube-api-access-97k96\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " 
pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.577928 master-0 kubenswrapper[31559]: I0216 02:37:50.577813 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.578115 master-0 kubenswrapper[31559]: I0216 02:37:50.577950 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de0ac61b-c6f4-4144-a647-815a3b8e7bca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.578115 master-0 kubenswrapper[31559]: I0216 02:37:50.578062 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.578115 master-0 kubenswrapper[31559]: I0216 02:37:50.578098 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de0ac61b-c6f4-4144-a647-815a3b8e7bca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.578271 master-0 kubenswrapper[31559]: I0216 02:37:50.578117 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0ac61b-c6f4-4144-a647-815a3b8e7bca-config\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.578271 master-0 kubenswrapper[31559]: I0216 02:37:50.578148 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97k96\" (UniqueName: \"kubernetes.io/projected/de0ac61b-c6f4-4144-a647-815a3b8e7bca-kube-api-access-97k96\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.578271 master-0 kubenswrapper[31559]: I0216 02:37:50.578206 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f29bbe57-d7e8-4aa5-bf01-d33b29608b60\" (UniqueName: \"kubernetes.io/csi/topolvm.io^952f7828-772c-4542-9a3b-69cd9701a2cf\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.578399 master-0 kubenswrapper[31559]: I0216 02:37:50.578229 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.579031 master-0 kubenswrapper[31559]: I0216 02:37:50.578667 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de0ac61b-c6f4-4144-a647-815a3b8e7bca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.579671 master-0 kubenswrapper[31559]: I0216 02:37:50.579636 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de0ac61b-c6f4-4144-a647-815a3b8e7bca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.580872 master-0 kubenswrapper[31559]: I0216 02:37:50.580820 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0ac61b-c6f4-4144-a647-815a3b8e7bca-config\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.581032 master-0 kubenswrapper[31559]: I0216 02:37:50.580980 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 02:37:50.581116 master-0 kubenswrapper[31559]: I0216 02:37:50.581024 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f29bbe57-d7e8-4aa5-bf01-d33b29608b60\" (UniqueName: \"kubernetes.io/csi/topolvm.io^952f7828-772c-4542-9a3b-69cd9701a2cf\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/92fd09db52c1f0435d71fdc0c24b60284f6327e4daee85c87ad7d4584f16671c/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.581887 master-0 kubenswrapper[31559]: I0216 02:37:50.581857 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.583096 master-0 kubenswrapper[31559]: I0216 02:37:50.583059 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.585838 master-0 kubenswrapper[31559]: I0216 02:37:50.585553 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0ac61b-c6f4-4144-a647-815a3b8e7bca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:50.617223 master-0 kubenswrapper[31559]: I0216 02:37:50.617173 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97k96\" (UniqueName: \"kubernetes.io/projected/de0ac61b-c6f4-4144-a647-815a3b8e7bca-kube-api-access-97k96\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:51.352965 master-0 kubenswrapper[31559]: I0216 02:37:51.352840 31559 generic.go:334] "Generic (PLEG): container finished" podID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" containerID="f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f" exitCode=0
Feb 16 02:37:51.352965 master-0 kubenswrapper[31559]: I0216 02:37:51.352936 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" event={"ID":"95fe90e7-e416-4f71-a5ec-4d1529dea48a","Type":"ContainerDied","Data":"f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f"}
Feb 16 02:37:51.355494 master-0 kubenswrapper[31559]: I0216 02:37:51.355401 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-8nnpn" event={"ID":"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d","Type":"ContainerStarted","Data":"3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5"}
Feb 16 02:37:51.361888 master-0 kubenswrapper[31559]: I0216 02:37:51.361854 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 16 02:37:51.364360 master-0 kubenswrapper[31559]: I0216 02:37:51.363008 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cd89103-815f-45df-9b47-4e3db3c708f2","Type":"ContainerStarted","Data":"944ba33a6e359736cccea372c32b78b2be141f4ffffa9a442cec0795810225c8"}
Feb 16 02:37:51.365479 master-0 kubenswrapper[31559]: I0216 02:37:51.365451 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" event={"ID":"9c20b360-7af1-42e9-af09-713bcb62953f","Type":"ContainerStarted","Data":"e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531"}
Feb 16 02:37:51.375473 master-0 kubenswrapper[31559]: I0216 02:37:51.375416 31559 generic.go:334] "Generic (PLEG): container finished" podID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" containerID="d84213aad1d75222804e8e1efe9f65f16b8447c7ca45e229a62d116e45ed3773" exitCode=0
Feb 16 02:37:51.375762 master-0 kubenswrapper[31559]: I0216 02:37:51.375647 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" event={"ID":"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4","Type":"ContainerDied","Data":"d84213aad1d75222804e8e1efe9f65f16b8447c7ca45e229a62d116e45ed3773"}
Feb 16 02:37:51.377859 master-0 kubenswrapper[31559]: I0216 02:37:51.377670 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 02:37:51.791149 master-0 kubenswrapper[31559]: W0216 02:37:51.791089 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18822ed6_bb35_43f9_b51c_feb4aa42fcc6.slice/crio-baab9ff073d090284c280c063b72fa0506ea25e0e7c856928dd51da0aedf2c93 WatchSource:0}: Error finding container baab9ff073d090284c280c063b72fa0506ea25e0e7c856928dd51da0aedf2c93: Status 404 returned error can't find the container with id baab9ff073d090284c280c063b72fa0506ea25e0e7c856928dd51da0aedf2c93
Feb 16 02:37:51.795412 master-0 kubenswrapper[31559]: I0216 02:37:51.794272 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 02:37:51.846746 master-0 kubenswrapper[31559]: I0216 02:37:51.846693 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 16 02:37:51.989658 master-0 kubenswrapper[31559]: E0216 02:37:51.974666 31559 log.go:32] "CreateContainer in
sandbox from runtime service failed" err=<
Feb 16 02:37:51.989658 master-0 kubenswrapper[31559]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/95fe90e7-e416-4f71-a5ec-4d1529dea48a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Feb 16 02:37:51.989658 master-0 kubenswrapper[31559]: > podSandboxID="c183f0bdaf5ad7e912d2e52e1a323751f5cb1b0fe444e8f745eb39b9c989fb1d"
Feb 16 02:37:51.989658 master-0 kubenswrapper[31559]: E0216 02:37:51.974862 31559 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 16 02:37:51.989658 master-0 kubenswrapper[31559]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbchf8h696h5ffh5cdh585hc5hbfh597h58dhfh554h67bh9bh5c9hfch7dh5fbhbbh567h78h669hf8h65dh55dh588h5ddh88h694h669h95h8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blbww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5bcd98d69f-xwh4z_openstack(95fe90e7-e416-4f71-a5ec-4d1529dea48a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/95fe90e7-e416-4f71-a5ec-4d1529dea48a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Feb 16 02:37:51.989658 master-0 kubenswrapper[31559]: > logger="UnhandledError"
Feb 16 02:37:51.989658 master-0 kubenswrapper[31559]: E0216 02:37:51.978528 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/95fe90e7-e416-4f71-a5ec-4d1529dea48a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" podUID="95fe90e7-e416-4f71-a5ec-4d1529dea48a"
Feb 16 02:37:51.989658 master-0 kubenswrapper[31559]: I0216 02:37:51.987640 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jhsdr"]
Feb 16 02:37:51.998469 master-0 kubenswrapper[31559]: I0216 02:37:51.998380 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 02:38:52.139319 master-0 kubenswrapper[31559]: I0216 02:37:52.137793 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w2cr4"]
Feb 16 02:37:52.147243 master-0 kubenswrapper[31559]: I0216 02:37:52.147197 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f29bbe57-d7e8-4aa5-bf01-d33b29608b60\" (UniqueName: \"kubernetes.io/csi/topolvm.io^952f7828-772c-4542-9a3b-69cd9701a2cf\") pod \"ovsdbserver-sb-0\" (UID: \"de0ac61b-c6f4-4144-a647-815a3b8e7bca\") " pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:52.155480 master-0 kubenswrapper[31559]: I0216 02:37:52.155195 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-8nnpn"
Feb 16 02:37:52.160848 master-0 kubenswrapper[31559]: I0216 02:37:52.160809 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 16 02:37:52.170045 master-0 kubenswrapper[31559]: I0216 02:37:52.169997 31559 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd"
Feb 16 02:37:52.333591 master-0 kubenswrapper[31559]: I0216 02:37:52.333530 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6flv\" (UniqueName: \"kubernetes.io/projected/9c20b360-7af1-42e9-af09-713bcb62953f-kube-api-access-r6flv\") pod \"9c20b360-7af1-42e9-af09-713bcb62953f\" (UID: \"9c20b360-7af1-42e9-af09-713bcb62953f\") "
Feb 16 02:37:52.333773 master-0 kubenswrapper[31559]: I0216 02:37:52.333643 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-config\") pod \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") "
Feb 16 02:37:52.334325 master-0 kubenswrapper[31559]: I0216 02:37:52.334187 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk89l\" (UniqueName: \"kubernetes.io/projected/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-kube-api-access-tk89l\") pod \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") "
Feb 16 02:37:52.334782 master-0 kubenswrapper[31559]: I0216 02:37:52.334714 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-dns-svc\") pod \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\" (UID: \"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d\") "
Feb 16 02:37:52.334782 master-0 kubenswrapper[31559]: I0216 02:37:52.334744 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c20b360-7af1-42e9-af09-713bcb62953f-config\") pod \"9c20b360-7af1-42e9-af09-713bcb62953f\" (UID: \"9c20b360-7af1-42e9-af09-713bcb62953f\") "
Feb 16 02:37:52.337750 master-0 kubenswrapper[31559]: I0216 02:37:52.337710 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c20b360-7af1-42e9-af09-713bcb62953f-kube-api-access-r6flv" (OuterVolumeSpecName: "kube-api-access-r6flv") pod "9c20b360-7af1-42e9-af09-713bcb62953f" (UID: "9c20b360-7af1-42e9-af09-713bcb62953f"). InnerVolumeSpecName "kube-api-access-r6flv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:37:52.337826 master-0 kubenswrapper[31559]: I0216 02:37:52.337789 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-kube-api-access-tk89l" (OuterVolumeSpecName: "kube-api-access-tk89l") pod "b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d" (UID: "b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d"). InnerVolumeSpecName "kube-api-access-tk89l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:37:52.356934 master-0 kubenswrapper[31559]: I0216 02:37:52.356864 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d" (UID: "b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:37:52.357485 master-0 kubenswrapper[31559]: I0216 02:37:52.357384 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-config" (OuterVolumeSpecName: "config") pod "b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d" (UID: "b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:37:52.369748 master-0 kubenswrapper[31559]: I0216 02:37:52.369716 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c20b360-7af1-42e9-af09-713bcb62953f-config" (OuterVolumeSpecName: "config") pod "9c20b360-7af1-42e9-af09-713bcb62953f" (UID: "9c20b360-7af1-42e9-af09-713bcb62953f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:37:52.388279 master-0 kubenswrapper[31559]: I0216 02:37:52.388222 31559 generic.go:334] "Generic (PLEG): container finished" podID="b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d" containerID="3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5" exitCode=0
Feb 16 02:37:52.388486 master-0 kubenswrapper[31559]: I0216 02:37:52.388296 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-8nnpn" event={"ID":"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d","Type":"ContainerDied","Data":"3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5"}
Feb 16 02:37:52.388486 master-0 kubenswrapper[31559]: I0216 02:37:52.388322 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-8nnpn" event={"ID":"b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d","Type":"ContainerDied","Data":"b77fa396d98203af59ef7081951217c1042c3f5893885fdd6cb1ca58e4ae1446"}
Feb 16 02:37:52.388486 master-0 kubenswrapper[31559]: I0216 02:37:52.388339 31559 scope.go:117] "RemoveContainer" containerID="3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5"
Feb 16 02:37:52.388486 master-0 kubenswrapper[31559]: I0216 02:37:52.388471 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-8nnpn"
Feb 16 02:37:52.392535 master-0 kubenswrapper[31559]: I0216 02:37:52.392485 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093","Type":"ContainerStarted","Data":"f91c97dfb035664560c6bf3301cf42a1352bf141cac9750e704bf85f2430f0c1"}
Feb 16 02:37:52.393848 master-0 kubenswrapper[31559]: I0216 02:37:52.393805 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"18822ed6-bb35-43f9-b51c-feb4aa42fcc6","Type":"ContainerStarted","Data":"baab9ff073d090284c280c063b72fa0506ea25e0e7c856928dd51da0aedf2c93"}
Feb 16 02:37:52.395452 master-0 kubenswrapper[31559]: I0216 02:37:52.395419 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w2cr4" event={"ID":"3d14e07d-5696-4d56-a6bc-b076e1c06fb2","Type":"ContainerStarted","Data":"c709f6b210d9e27bafea92f7e30935fd74bb5de3aa77ddf3f202fa6ed69dfb1a"}
Feb 16 02:37:52.398330 master-0 kubenswrapper[31559]: I0216 02:37:52.398277 31559 generic.go:334] "Generic (PLEG): container finished" podID="9c20b360-7af1-42e9-af09-713bcb62953f" containerID="e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531" exitCode=0
Feb 16 02:37:52.398330 master-0 kubenswrapper[31559]: I0216 02:37:52.398300 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" event={"ID":"9c20b360-7af1-42e9-af09-713bcb62953f","Type":"ContainerDied","Data":"e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531"}
Feb 16 02:37:52.398330 master-0 kubenswrapper[31559]: I0216 02:37:52.398324 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd" event={"ID":"9c20b360-7af1-42e9-af09-713bcb62953f","Type":"ContainerDied","Data":"c0462479ee798b7b85332bbaf6948c46593a6f360a3d31e5e18d7e41e66542a5"}
Feb 16 02:37:52.398534 master-0
kubenswrapper[31559]: I0216 02:37:52.398356 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-gwljd"
Feb 16 02:37:52.399727 master-0 kubenswrapper[31559]: I0216 02:37:52.399705 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"93fa9c51-4a98-4a66-8b10-a1213ca9f95e","Type":"ContainerStarted","Data":"edbe47b4b49daee5bca772d4b9b66bd08a1521df7109c5b140566758fd011887"}
Feb 16 02:37:52.401422 master-0 kubenswrapper[31559]: I0216 02:37:52.401395 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6","Type":"ContainerStarted","Data":"207f417294d3bdb201ced54a178d23a1e5ece032ce163f054ffd15ccd91f750f"}
Feb 16 02:37:52.403012 master-0 kubenswrapper[31559]: I0216 02:37:52.402960 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jhsdr" event={"ID":"caef9948-8516-40b1-a940-6b2bea06cf6d","Type":"ContainerStarted","Data":"fce3117ba4d5e813035685dea317b750249327743dd151ccd4efc339d1c8f7fd"}
Feb 16 02:37:52.404042 master-0 kubenswrapper[31559]: I0216 02:37:52.403999 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95409e86-4b58-4ee3-bd1a-e67fbf366ebb","Type":"ContainerStarted","Data":"57889f42426f858744d6bdb555e927dc02e4170189c76382ff1c47a66c09ffde"}
Feb 16 02:37:52.406159 master-0 kubenswrapper[31559]: I0216 02:37:52.406124 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" event={"ID":"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4","Type":"ContainerStarted","Data":"abfe958393130499ef1a5e66a06c24c9dc9b3a6ce028b4edc19ad88c5828b9a3"}
Feb 16 02:37:52.406257 master-0 kubenswrapper[31559]: I0216 02:37:52.406232 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p"
Feb 16 02:37:52.415532 master-0 kubenswrapper[31559]: I0216 02:37:52.415259 31559 scope.go:117] "RemoveContainer" containerID="3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5"
Feb 16 02:37:52.415733 master-0 kubenswrapper[31559]: E0216 02:37:52.415701 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5\": container with ID starting with 3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5 not found: ID does not exist" containerID="3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5"
Feb 16 02:37:52.415796 master-0 kubenswrapper[31559]: I0216 02:37:52.415728 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5"} err="failed to get container status \"3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5\": rpc error: code = NotFound desc = could not find container \"3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5\": container with ID starting with 3b75360af505fbfea7a4ba7bac156e362b054022dd01d56bbeeb6ac8274a1ab5 not found: ID does not exist"
Feb 16 02:37:52.415796 master-0 kubenswrapper[31559]: I0216 02:37:52.415746 31559 scope.go:117] "RemoveContainer" containerID="e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531"
Feb 16 02:37:52.437940 master-0 kubenswrapper[31559]: I0216 02:37:52.437882 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c20b360-7af1-42e9-af09-713bcb62953f-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:37:52.438150 master-0 kubenswrapper[31559]: I0216 02:37:52.438020 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6flv\" (UniqueName: \"kubernetes.io/projected/9c20b360-7af1-42e9-af09-713bcb62953f-kube-api-access-r6flv\") on node \"master-0\" DevicePath \"\""
Feb 16 02:37:52.438150 master-0 kubenswrapper[31559]: I0216 02:37:52.438050 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:37:52.438150 master-0 kubenswrapper[31559]: I0216 02:37:52.438060 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk89l\" (UniqueName: \"kubernetes.io/projected/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-kube-api-access-tk89l\") on node \"master-0\" DevicePath \"\""
Feb 16 02:37:52.438150 master-0 kubenswrapper[31559]: I0216 02:37:52.438085 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 02:37:52.451516 master-0 kubenswrapper[31559]: I0216 02:37:52.451377 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-8nnpn"]
Feb 16 02:37:52.454625 master-0 kubenswrapper[31559]: I0216 02:37:52.454592 31559 scope.go:117] "RemoveContainer" containerID="e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531"
Feb 16 02:37:52.457167 master-0 kubenswrapper[31559]: E0216 02:37:52.457037 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531\": container with ID starting with e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531 not found: ID does not exist" containerID="e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531"
Feb 16 02:37:52.457167 master-0 kubenswrapper[31559]: I0216 02:37:52.457127 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531"} err="failed to get container status \"e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531\": rpc error: code = NotFound desc = could not find container \"e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531\": container with ID starting with e1996cec40e378f85cd25e9b4abea66ba84b10fbfe910c7a79e7e4aa7de23531 not found: ID does not exist"
Feb 16 02:37:52.464667 master-0 kubenswrapper[31559]: I0216 02:37:52.463173 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-8nnpn"]
Feb 16 02:37:52.513216 master-0 kubenswrapper[31559]: I0216 02:37:52.513070 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-gwljd"]
Feb 16 02:37:52.538262 master-0 kubenswrapper[31559]: I0216 02:37:52.538158 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-gwljd"]
Feb 16 02:37:52.545222 master-0 kubenswrapper[31559]: I0216 02:37:52.545150 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" podStartSLOduration=3.129073925 podStartE2EDuration="19.545129746s" podCreationTimestamp="2026-02-16 02:37:33 +0000 UTC" firstStartedPulling="2026-02-16 02:37:34.52746005 +0000 UTC m=+906.872066065" lastFinishedPulling="2026-02-16 02:37:50.943515871 +0000 UTC m=+923.288121886" observedRunningTime="2026-02-16 02:37:52.511214706 +0000 UTC m=+924.855820721" watchObservedRunningTime="2026-02-16 02:37:52.545129746 +0000 UTC m=+924.889735761"
Feb 16 02:37:52.735190 master-0 kubenswrapper[31559]: I0216 02:37:52.735144 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 16 02:37:53.430495 master-0 kubenswrapper[31559]: I0216 02:37:53.430424 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" event={"ID":"95fe90e7-e416-4f71-a5ec-4d1529dea48a","Type":"ContainerStarted","Data":"3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df"}
Feb 16
02:37:53.431315 master-0 kubenswrapper[31559]: I0216 02:37:53.430703 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z"
Feb 16 02:37:53.461604 master-0 kubenswrapper[31559]: I0216 02:37:53.461507 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" podStartSLOduration=3.6986755799999997 podStartE2EDuration="20.461487456s" podCreationTimestamp="2026-02-16 02:37:33 +0000 UTC" firstStartedPulling="2026-02-16 02:37:34.10815696 +0000 UTC m=+906.452762975" lastFinishedPulling="2026-02-16 02:37:50.870968836 +0000 UTC m=+923.215574851" observedRunningTime="2026-02-16 02:37:53.455070939 +0000 UTC m=+925.799676954" watchObservedRunningTime="2026-02-16 02:37:53.461487456 +0000 UTC m=+925.806093471"
Feb 16 02:37:53.937837 master-0 kubenswrapper[31559]: I0216 02:37:53.937770 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c20b360-7af1-42e9-af09-713bcb62953f" path="/var/lib/kubelet/pods/9c20b360-7af1-42e9-af09-713bcb62953f/volumes"
Feb 16 02:37:53.938580 master-0 kubenswrapper[31559]: I0216 02:37:53.938553 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d" path="/var/lib/kubelet/pods/b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d/volumes"
Feb 16 02:37:54.447564 master-0 kubenswrapper[31559]: I0216 02:37:54.447502 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"de0ac61b-c6f4-4144-a647-815a3b8e7bca","Type":"ContainerStarted","Data":"65e4305b245fedf72cf5cf51ed6a80e0a872ae13f51190fd5e9a081994ef2fae"}
Feb 16 02:37:58.626071 master-0 kubenswrapper[31559]: I0216 02:37:58.625881 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z"
Feb 16 02:37:59.015774 master-0 kubenswrapper[31559]: I0216 02:37:59.015618 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p"
Feb 16 02:37:59.129422 master-0 kubenswrapper[31559]: I0216 02:37:59.129313 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-xwh4z"]
Feb 16 02:37:59.508045 master-0 kubenswrapper[31559]: I0216 02:37:59.507985 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95409e86-4b58-4ee3-bd1a-e67fbf366ebb","Type":"ContainerStarted","Data":"d9d400b8056cd4ebfad098fde379395a912dfccab580161617bad7183a0a3d6d"}
Feb 16 02:37:59.510644 master-0 kubenswrapper[31559]: I0216 02:37:59.510612 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w2cr4" event={"ID":"3d14e07d-5696-4d56-a6bc-b076e1c06fb2","Type":"ContainerStarted","Data":"86e47a4359f4d64bfe6bca77ea628ba43fb573561343c53fb864b2f5bb11f9d1"}
Feb 16 02:37:59.521099 master-0 kubenswrapper[31559]: I0216 02:37:59.521056 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"de0ac61b-c6f4-4144-a647-815a3b8e7bca","Type":"ContainerStarted","Data":"027b22a95bafa8aa8730006159d502cc2bba0420b5672dbdf7801735b7538139"}
Feb 16 02:37:59.533753 master-0 kubenswrapper[31559]: I0216 02:37:59.531550 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"22cfc80b-fac3-40d5-b3b4-7b1f5f5d37f6","Type":"ContainerStarted","Data":"b183f572eab497cda0372796129621026ab3656f01207c872a20616e72a6ae8b"}
Feb 16 02:37:59.533753 master-0 kubenswrapper[31559]: I0216 02:37:59.531602 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 16 02:37:59.538326 master-0 kubenswrapper[31559]: I0216 02:37:59.538162 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093","Type":"ContainerStarted","Data":"4868ca7d55bae5671660ebaadcf65cadcb8d8105b6241dabf6663afd1ec28012"}
Feb 16 02:37:59.543309 master-0 kubenswrapper[31559]: I0216 02:37:59.543267 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jhsdr" event={"ID":"caef9948-8516-40b1-a940-6b2bea06cf6d","Type":"ContainerStarted","Data":"c9e7c45c1e22d4c500c5b2d3f36add15f325fac56b744a2f3f6a0db130f19510"}
Feb 16 02:37:59.543593 master-0 kubenswrapper[31559]: I0216 02:37:59.543551 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jhsdr"
Feb 16 02:37:59.550712 master-0 kubenswrapper[31559]: I0216 02:37:59.550666 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"18822ed6-bb35-43f9-b51c-feb4aa42fcc6","Type":"ContainerStarted","Data":"a8b9f4e3a9086b6d5ae5b88a533d46bdf4a729132e3a7971b860dc503f6efd01"}
Feb 16 02:37:59.550846 master-0 kubenswrapper[31559]: I0216 02:37:59.550772 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" podUID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" containerName="dnsmasq-dns" containerID="cri-o://3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df" gracePeriod=10
Feb 16 02:37:59.568566 master-0 kubenswrapper[31559]: I0216 02:37:59.562231 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.306333445 podStartE2EDuration="22.562211381s" podCreationTimestamp="2026-02-16 02:37:37 +0000 UTC" firstStartedPulling="2026-02-16 02:37:51.527498267 +0000 UTC m=+923.872104282" lastFinishedPulling="2026-02-16 02:37:58.783376183 +0000 UTC m=+931.127982218" observedRunningTime="2026-02-16 02:37:59.553274469 +0000 UTC m=+931.897880494" watchObservedRunningTime="2026-02-16 02:37:59.562211381 +0000 UTC m=+931.906817406"
Feb 16 02:37:59.603574 master-0 kubenswrapper[31559]: I0216 02:37:59.602316 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jhsdr" podStartSLOduration=10.729586367 podStartE2EDuration="17.602296242s" podCreationTimestamp="2026-02-16 02:37:42 +0000 UTC" firstStartedPulling="2026-02-16 02:37:52.010280375 +0000 UTC m=+924.354886390" lastFinishedPulling="2026-02-16 02:37:58.88299025 +0000 UTC m=+931.227596265" observedRunningTime="2026-02-16 02:37:59.595569117 +0000 UTC m=+931.940175132" watchObservedRunningTime="2026-02-16 02:37:59.602296242 +0000 UTC m=+931.946902267"
Feb 16 02:38:00.351489 master-0 kubenswrapper[31559]: I0216 02:38:00.350320 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z"
Feb 16 02:38:00.445997 master-0 kubenswrapper[31559]: I0216 02:38:00.445912 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-config\") pod \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") "
Feb 16 02:38:00.446128 master-0 kubenswrapper[31559]: I0216 02:38:00.446116 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-dns-svc\") pod \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") "
Feb 16 02:38:00.446192 master-0 kubenswrapper[31559]: I0216 02:38:00.446156 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blbww\" (UniqueName: \"kubernetes.io/projected/95fe90e7-e416-4f71-a5ec-4d1529dea48a-kube-api-access-blbww\") pod \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\" (UID: \"95fe90e7-e416-4f71-a5ec-4d1529dea48a\") "
Feb 16 02:38:00.456507 master-0 kubenswrapper[31559]: I0216 02:38:00.450484 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fe90e7-e416-4f71-a5ec-4d1529dea48a-kube-api-access-blbww" (OuterVolumeSpecName: "kube-api-access-blbww") pod "95fe90e7-e416-4f71-a5ec-4d1529dea48a" (UID: "95fe90e7-e416-4f71-a5ec-4d1529dea48a"). InnerVolumeSpecName "kube-api-access-blbww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:38:00.508003 master-0 kubenswrapper[31559]: I0216 02:38:00.507939 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95fe90e7-e416-4f71-a5ec-4d1529dea48a" (UID: "95fe90e7-e416-4f71-a5ec-4d1529dea48a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:38:00.514255 master-0 kubenswrapper[31559]: I0216 02:38:00.514218 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-config" (OuterVolumeSpecName: "config") pod "95fe90e7-e416-4f71-a5ec-4d1529dea48a" (UID: "95fe90e7-e416-4f71-a5ec-4d1529dea48a"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:00.548674 master-0 kubenswrapper[31559]: I0216 02:38:00.548600 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:00.548674 master-0 kubenswrapper[31559]: I0216 02:38:00.548649 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blbww\" (UniqueName: \"kubernetes.io/projected/95fe90e7-e416-4f71-a5ec-4d1529dea48a-kube-api-access-blbww\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:00.548674 master-0 kubenswrapper[31559]: I0216 02:38:00.548663 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fe90e7-e416-4f71-a5ec-4d1529dea48a-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:00.565072 master-0 kubenswrapper[31559]: I0216 02:38:00.563980 31559 generic.go:334] "Generic (PLEG): container finished" podID="3d14e07d-5696-4d56-a6bc-b076e1c06fb2" containerID="86e47a4359f4d64bfe6bca77ea628ba43fb573561343c53fb864b2f5bb11f9d1" exitCode=0 Feb 16 02:38:00.565313 master-0 kubenswrapper[31559]: I0216 02:38:00.564090 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w2cr4" event={"ID":"3d14e07d-5696-4d56-a6bc-b076e1c06fb2","Type":"ContainerDied","Data":"86e47a4359f4d64bfe6bca77ea628ba43fb573561343c53fb864b2f5bb11f9d1"} Feb 16 02:38:00.569202 master-0 kubenswrapper[31559]: I0216 02:38:00.569150 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"93fa9c51-4a98-4a66-8b10-a1213ca9f95e","Type":"ContainerStarted","Data":"bacdcbe8aac8083b3c666eea1d346ab690592ce6c9729c6971e709a915c460ec"} Feb 16 02:38:00.573347 master-0 kubenswrapper[31559]: I0216 02:38:00.573293 31559 generic.go:334] "Generic (PLEG): container finished" podID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" 
containerID="3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df" exitCode=0 Feb 16 02:38:00.573448 master-0 kubenswrapper[31559]: I0216 02:38:00.573363 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" event={"ID":"95fe90e7-e416-4f71-a5ec-4d1529dea48a","Type":"ContainerDied","Data":"3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df"} Feb 16 02:38:00.573448 master-0 kubenswrapper[31559]: I0216 02:38:00.573391 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" event={"ID":"95fe90e7-e416-4f71-a5ec-4d1529dea48a","Type":"ContainerDied","Data":"c183f0bdaf5ad7e912d2e52e1a323751f5cb1b0fe444e8f745eb39b9c989fb1d"} Feb 16 02:38:00.573448 master-0 kubenswrapper[31559]: I0216 02:38:00.573410 31559 scope.go:117] "RemoveContainer" containerID="3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df" Feb 16 02:38:00.573629 master-0 kubenswrapper[31559]: I0216 02:38:00.573535 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-xwh4z" Feb 16 02:38:00.577790 master-0 kubenswrapper[31559]: I0216 02:38:00.577754 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cd89103-815f-45df-9b47-4e3db3c708f2","Type":"ContainerStarted","Data":"adb3da42042abb84afffd0acd2366c7547c839acc0d5fa7934ce46de37dc8721"} Feb 16 02:38:00.659499 master-0 kubenswrapper[31559]: I0216 02:38:00.659411 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-xwh4z"] Feb 16 02:38:00.676997 master-0 kubenswrapper[31559]: I0216 02:38:00.675208 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-xwh4z"] Feb 16 02:38:01.075677 master-0 kubenswrapper[31559]: I0216 02:38:01.075602 31559 scope.go:117] "RemoveContainer" containerID="f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f" Feb 16 02:38:01.157865 master-0 kubenswrapper[31559]: I0216 02:38:01.157797 31559 scope.go:117] "RemoveContainer" containerID="3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df" Feb 16 02:38:01.158749 master-0 kubenswrapper[31559]: E0216 02:38:01.158673 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df\": container with ID starting with 3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df not found: ID does not exist" containerID="3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df" Feb 16 02:38:01.158832 master-0 kubenswrapper[31559]: I0216 02:38:01.158768 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df"} err="failed to get container status \"3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df\": rpc error: code = NotFound desc = 
could not find container \"3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df\": container with ID starting with 3df6f62fabdf5d82ee45edc100b160d6e399a5af7e5fb29396782c052ad171df not found: ID does not exist" Feb 16 02:38:01.158832 master-0 kubenswrapper[31559]: I0216 02:38:01.158816 31559 scope.go:117] "RemoveContainer" containerID="f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f" Feb 16 02:38:01.159972 master-0 kubenswrapper[31559]: E0216 02:38:01.159312 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f\": container with ID starting with f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f not found: ID does not exist" containerID="f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f" Feb 16 02:38:01.159972 master-0 kubenswrapper[31559]: I0216 02:38:01.159375 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f"} err="failed to get container status \"f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f\": rpc error: code = NotFound desc = could not find container \"f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f\": container with ID starting with f8f757e37e9abb67490a227944180281a61592c287bcaaed3221286332385a0f not found: ID does not exist" Feb 16 02:38:01.592527 master-0 kubenswrapper[31559]: I0216 02:38:01.592418 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ffc4ca8-6010-47bd-88b0-ce9ff30c2093","Type":"ContainerStarted","Data":"57301288f27719c6f0798abf7cffadae707edc69897e75a9089f362334bb990f"} Feb 16 02:38:01.595883 master-0 kubenswrapper[31559]: I0216 02:38:01.595815 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"de0ac61b-c6f4-4144-a647-815a3b8e7bca","Type":"ContainerStarted","Data":"8a03f6f89f99d8723f979d9e8d168023113c4741f8673ab06898cf1620e97cbe"} Feb 16 02:38:01.663194 master-0 kubenswrapper[31559]: I0216 02:38:01.663098 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.322340520000001 podStartE2EDuration="19.663069394s" podCreationTimestamp="2026-02-16 02:37:42 +0000 UTC" firstStartedPulling="2026-02-16 02:37:51.859806227 +0000 UTC m=+924.204412232" lastFinishedPulling="2026-02-16 02:38:01.200535091 +0000 UTC m=+933.545141106" observedRunningTime="2026-02-16 02:38:01.622932771 +0000 UTC m=+933.967538816" watchObservedRunningTime="2026-02-16 02:38:01.663069394 +0000 UTC m=+934.007675449" Feb 16 02:38:01.710891 master-0 kubenswrapper[31559]: I0216 02:38:01.710596 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.335753772 podStartE2EDuration="15.710570778s" podCreationTimestamp="2026-02-16 02:37:46 +0000 UTC" firstStartedPulling="2026-02-16 02:37:53.840802777 +0000 UTC m=+926.185408792" lastFinishedPulling="2026-02-16 02:38:01.215619753 +0000 UTC m=+933.560225798" observedRunningTime="2026-02-16 02:38:01.657784507 +0000 UTC m=+934.002390532" watchObservedRunningTime="2026-02-16 02:38:01.710570778 +0000 UTC m=+934.055176803" Feb 16 02:38:01.948351 master-0 kubenswrapper[31559]: I0216 02:38:01.948298 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" path="/var/lib/kubelet/pods/95fe90e7-e416-4f71-a5ec-4d1529dea48a/volumes" Feb 16 02:38:02.162046 master-0 kubenswrapper[31559]: I0216 02:38:02.161877 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 02:38:02.617200 master-0 kubenswrapper[31559]: I0216 02:38:02.617064 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-w2cr4" event={"ID":"3d14e07d-5696-4d56-a6bc-b076e1c06fb2","Type":"ContainerStarted","Data":"561242066c8d6809699c42b87ce8c172dc5c3e73f9c4e64513011a3f7ef3cc2d"} Feb 16 02:38:02.617200 master-0 kubenswrapper[31559]: I0216 02:38:02.617159 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w2cr4" event={"ID":"3d14e07d-5696-4d56-a6bc-b076e1c06fb2","Type":"ContainerStarted","Data":"10220ee6e343a86c3137eaac838208faf59a8148d35e51f00dd41983cda78aac"} Feb 16 02:38:02.665409 master-0 kubenswrapper[31559]: I0216 02:38:02.665287 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-w2cr4" podStartSLOduration=13.368061384 podStartE2EDuration="19.665251311s" podCreationTimestamp="2026-02-16 02:37:43 +0000 UTC" firstStartedPulling="2026-02-16 02:37:52.132455119 +0000 UTC m=+924.477061134" lastFinishedPulling="2026-02-16 02:37:58.429645046 +0000 UTC m=+930.774251061" observedRunningTime="2026-02-16 02:38:02.650745865 +0000 UTC m=+934.995351940" watchObservedRunningTime="2026-02-16 02:38:02.665251311 +0000 UTC m=+935.009857356" Feb 16 02:38:03.398357 master-0 kubenswrapper[31559]: I0216 02:38:03.398278 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:38:03.399519 master-0 kubenswrapper[31559]: I0216 02:38:03.399485 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:38:03.636070 master-0 kubenswrapper[31559]: I0216 02:38:03.635984 31559 generic.go:334] "Generic (PLEG): container finished" podID="18822ed6-bb35-43f9-b51c-feb4aa42fcc6" containerID="a8b9f4e3a9086b6d5ae5b88a533d46bdf4a729132e3a7971b860dc503f6efd01" exitCode=0 Feb 16 02:38:03.637225 master-0 kubenswrapper[31559]: I0216 02:38:03.637105 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"18822ed6-bb35-43f9-b51c-feb4aa42fcc6","Type":"ContainerDied","Data":"a8b9f4e3a9086b6d5ae5b88a533d46bdf4a729132e3a7971b860dc503f6efd01"} Feb 16 02:38:03.638814 master-0 kubenswrapper[31559]: I0216 02:38:03.638767 31559 generic.go:334] "Generic (PLEG): container finished" podID="95409e86-4b58-4ee3-bd1a-e67fbf366ebb" containerID="d9d400b8056cd4ebfad098fde379395a912dfccab580161617bad7183a0a3d6d" exitCode=0 Feb 16 02:38:03.638964 master-0 kubenswrapper[31559]: I0216 02:38:03.638878 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95409e86-4b58-4ee3-bd1a-e67fbf366ebb","Type":"ContainerDied","Data":"d9d400b8056cd4ebfad098fde379395a912dfccab580161617bad7183a0a3d6d"} Feb 16 02:38:04.162652 master-0 kubenswrapper[31559]: I0216 02:38:04.162555 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 02:38:04.246053 master-0 kubenswrapper[31559]: I0216 02:38:04.245916 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 02:38:04.485490 master-0 kubenswrapper[31559]: I0216 02:38:04.485410 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 02:38:04.485796 master-0 kubenswrapper[31559]: I0216 02:38:04.485504 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 02:38:04.577581 master-0 kubenswrapper[31559]: I0216 02:38:04.577361 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 02:38:04.657390 master-0 kubenswrapper[31559]: I0216 02:38:04.657262 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"18822ed6-bb35-43f9-b51c-feb4aa42fcc6","Type":"ContainerStarted","Data":"c58496c3cc405b99ecd945ff6ff1877c843e2264bc63f20fd342c705b67a9252"} Feb 16 
02:38:04.660671 master-0 kubenswrapper[31559]: I0216 02:38:04.660573 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95409e86-4b58-4ee3-bd1a-e67fbf366ebb","Type":"ContainerStarted","Data":"19e0b1570b37a055055b7fef16898757e4ed3a643ab98f549bbb2016156e7d33"} Feb 16 02:38:04.703166 master-0 kubenswrapper[31559]: I0216 02:38:04.703064 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.645415939 podStartE2EDuration="30.703041536s" podCreationTimestamp="2026-02-16 02:37:34 +0000 UTC" firstStartedPulling="2026-02-16 02:37:51.797759806 +0000 UTC m=+924.142365821" lastFinishedPulling="2026-02-16 02:37:58.855385403 +0000 UTC m=+931.199991418" observedRunningTime="2026-02-16 02:38:04.697571014 +0000 UTC m=+937.042177059" watchObservedRunningTime="2026-02-16 02:38:04.703041536 +0000 UTC m=+937.047647591" Feb 16 02:38:04.730813 master-0 kubenswrapper[31559]: I0216 02:38:04.728276 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 02:38:04.730813 master-0 kubenswrapper[31559]: I0216 02:38:04.728776 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 02:38:04.743524 master-0 kubenswrapper[31559]: I0216 02:38:04.735359 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.982904084 podStartE2EDuration="28.735339505s" podCreationTimestamp="2026-02-16 02:37:36 +0000 UTC" firstStartedPulling="2026-02-16 02:37:52.030942982 +0000 UTC m=+924.375548997" lastFinishedPulling="2026-02-16 02:37:58.783378383 +0000 UTC m=+931.127984418" observedRunningTime="2026-02-16 02:38:04.730582822 +0000 UTC m=+937.075188857" watchObservedRunningTime="2026-02-16 02:38:04.735339505 +0000 UTC m=+937.079945520" Feb 16 02:38:05.041847 master-0 
kubenswrapper[31559]: I0216 02:38:05.035659 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-gs67b"] Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: E0216 02:38:05.036132 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c20b360-7af1-42e9-af09-713bcb62953f" containerName="init" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.036147 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c20b360-7af1-42e9-af09-713bcb62953f" containerName="init" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: E0216 02:38:05.036197 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" containerName="dnsmasq-dns" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.036204 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" containerName="dnsmasq-dns" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: E0216 02:38:05.036219 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d" containerName="init" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.036226 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d" containerName="init" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: E0216 02:38:05.036239 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" containerName="init" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.036246 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" containerName="init" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.037376 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fe90e7-e416-4f71-a5ec-4d1529dea48a" containerName="dnsmasq-dns" Feb 16 
02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.037403 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c20b360-7af1-42e9-af09-713bcb62953f" containerName="init" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.037421 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82f3e51-fd8b-4c28-bf1c-e1bc3e1d874d" containerName="init" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.038400 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.041847 master-0 kubenswrapper[31559]: I0216 02:38:05.040962 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 02:38:05.062382 master-0 kubenswrapper[31559]: I0216 02:38:05.062292 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-gs67b"] Feb 16 02:38:05.114461 master-0 kubenswrapper[31559]: I0216 02:38:05.114275 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-d45k6"] Feb 16 02:38:05.119646 master-0 kubenswrapper[31559]: I0216 02:38:05.115486 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.119646 master-0 kubenswrapper[31559]: I0216 02:38:05.118864 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 02:38:05.128669 master-0 kubenswrapper[31559]: I0216 02:38:05.128618 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-d45k6"] Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191470 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12ebcefd-cb4e-43a5-b61b-3ed39634af74-ovs-rundir\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191521 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12ebcefd-cb4e-43a5-b61b-3ed39634af74-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191552 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-config\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191577 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191600 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191690 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfflv\" (UniqueName: \"kubernetes.io/projected/12ebcefd-cb4e-43a5-b61b-3ed39634af74-kube-api-access-dfflv\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191736 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12ebcefd-cb4e-43a5-b61b-3ed39634af74-ovn-rundir\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191755 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ebcefd-cb4e-43a5-b61b-3ed39634af74-combined-ca-bundle\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191774 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/12ebcefd-cb4e-43a5-b61b-3ed39634af74-config\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.191903 master-0 kubenswrapper[31559]: I0216 02:38:05.191793 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vw6\" (UniqueName: \"kubernetes.io/projected/82a9069d-9a55-496b-aa69-f107bcbff808-kube-api-access-f5vw6\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.226589 master-0 kubenswrapper[31559]: I0216 02:38:05.226414 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-gs67b"] Feb 16 02:38:05.227228 master-0 kubenswrapper[31559]: E0216 02:38:05.227184 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-f5vw6 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" podUID="82a9069d-9a55-496b-aa69-f107bcbff808" Feb 16 02:38:05.263320 master-0 kubenswrapper[31559]: I0216 02:38:05.263164 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 02:38:05.264943 master-0 kubenswrapper[31559]: I0216 02:38:05.264903 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 02:38:05.271920 master-0 kubenswrapper[31559]: I0216 02:38:05.271338 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 02:38:05.271920 master-0 kubenswrapper[31559]: I0216 02:38:05.271585 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 02:38:05.271920 master-0 kubenswrapper[31559]: I0216 02:38:05.271697 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 02:38:05.278165 master-0 kubenswrapper[31559]: I0216 02:38:05.278035 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-ch65s"] Feb 16 02:38:05.279753 master-0 kubenswrapper[31559]: I0216 02:38:05.279723 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.285027 master-0 kubenswrapper[31559]: I0216 02:38:05.284970 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 02:38:05.298070 master-0 kubenswrapper[31559]: I0216 02:38:05.297663 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 02:38:05.299330 master-0 kubenswrapper[31559]: I0216 02:38:05.298806 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfflv\" (UniqueName: \"kubernetes.io/projected/12ebcefd-cb4e-43a5-b61b-3ed39634af74-kube-api-access-dfflv\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.299330 master-0 kubenswrapper[31559]: I0216 02:38:05.298868 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12ebcefd-cb4e-43a5-b61b-3ed39634af74-ovn-rundir\") pod 
\"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.299330 master-0 kubenswrapper[31559]: I0216 02:38:05.298891 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ebcefd-cb4e-43a5-b61b-3ed39634af74-combined-ca-bundle\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.299330 master-0 kubenswrapper[31559]: I0216 02:38:05.298909 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ebcefd-cb4e-43a5-b61b-3ed39634af74-config\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.299330 master-0 kubenswrapper[31559]: I0216 02:38:05.298931 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5vw6\" (UniqueName: \"kubernetes.io/projected/82a9069d-9a55-496b-aa69-f107bcbff808-kube-api-access-f5vw6\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.299643 master-0 kubenswrapper[31559]: I0216 02:38:05.299602 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ebcefd-cb4e-43a5-b61b-3ed39634af74-config\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.299643 master-0 kubenswrapper[31559]: I0216 02:38:05.299639 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12ebcefd-cb4e-43a5-b61b-3ed39634af74-ovs-rundir\") pod 
\"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.299738 master-0 kubenswrapper[31559]: I0216 02:38:05.299664 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12ebcefd-cb4e-43a5-b61b-3ed39634af74-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.299738 master-0 kubenswrapper[31559]: I0216 02:38:05.299689 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-config\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.299738 master-0 kubenswrapper[31559]: I0216 02:38:05.299713 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.299738 master-0 kubenswrapper[31559]: I0216 02:38:05.299735 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.300032 master-0 kubenswrapper[31559]: I0216 02:38:05.300000 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12ebcefd-cb4e-43a5-b61b-3ed39634af74-ovs-rundir\") pod 
\"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.300649 master-0 kubenswrapper[31559]: I0216 02:38:05.300622 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-config\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.307340 master-0 kubenswrapper[31559]: I0216 02:38:05.301768 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12ebcefd-cb4e-43a5-b61b-3ed39634af74-ovn-rundir\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.307340 master-0 kubenswrapper[31559]: I0216 02:38:05.302674 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ebcefd-cb4e-43a5-b61b-3ed39634af74-combined-ca-bundle\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.307340 master-0 kubenswrapper[31559]: I0216 02:38:05.303713 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12ebcefd-cb4e-43a5-b61b-3ed39634af74-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.307340 master-0 kubenswrapper[31559]: I0216 02:38:05.305202 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: 
\"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.311564 master-0 kubenswrapper[31559]: I0216 02:38:05.309483 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.319485 master-0 kubenswrapper[31559]: I0216 02:38:05.319396 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-ch65s"] Feb 16 02:38:05.323571 master-0 kubenswrapper[31559]: I0216 02:38:05.323413 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfflv\" (UniqueName: \"kubernetes.io/projected/12ebcefd-cb4e-43a5-b61b-3ed39634af74-kube-api-access-dfflv\") pod \"ovn-controller-metrics-d45k6\" (UID: \"12ebcefd-cb4e-43a5-b61b-3ed39634af74\") " pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.324781 master-0 kubenswrapper[31559]: I0216 02:38:05.324756 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5vw6\" (UniqueName: \"kubernetes.io/projected/82a9069d-9a55-496b-aa69-f107bcbff808-kube-api-access-f5vw6\") pod \"dnsmasq-dns-7c8cfc46bf-gs67b\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.403277 master-0 kubenswrapper[31559]: I0216 02:38:05.403133 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72gk6\" (UniqueName: \"kubernetes.io/projected/ab81ee43-70d7-4de2-8bbf-051589116ff2-kube-api-access-72gk6\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.403277 master-0 kubenswrapper[31559]: I0216 02:38:05.403183 31559 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.403277 master-0 kubenswrapper[31559]: I0216 02:38:05.403204 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.403277 master-0 kubenswrapper[31559]: I0216 02:38:05.403318 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frvw\" (UniqueName: \"kubernetes.io/projected/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-kube-api-access-4frvw\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.404073 master-0 kubenswrapper[31559]: I0216 02:38:05.403427 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.404073 master-0 kubenswrapper[31559]: I0216 02:38:05.403542 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.404073 master-0 kubenswrapper[31559]: I0216 
02:38:05.403654 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-config\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.404073 master-0 kubenswrapper[31559]: I0216 02:38:05.403873 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.404073 master-0 kubenswrapper[31559]: I0216 02:38:05.403893 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-config\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.404073 master-0 kubenswrapper[31559]: I0216 02:38:05.403954 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-scripts\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.404073 master-0 kubenswrapper[31559]: I0216 02:38:05.404047 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.404420 master-0 kubenswrapper[31559]: I0216 02:38:05.404105 31559 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.439674 master-0 kubenswrapper[31559]: I0216 02:38:05.439597 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-d45k6" Feb 16 02:38:05.505607 master-0 kubenswrapper[31559]: I0216 02:38:05.505421 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-config\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.505607 master-0 kubenswrapper[31559]: I0216 02:38:05.505576 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.505607 master-0 kubenswrapper[31559]: I0216 02:38:05.505596 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-config\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.505872 master-0 kubenswrapper[31559]: I0216 02:38:05.505623 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-scripts\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " 
pod="openstack/ovn-northd-0" Feb 16 02:38:05.505872 master-0 kubenswrapper[31559]: I0216 02:38:05.505657 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.505872 master-0 kubenswrapper[31559]: I0216 02:38:05.505675 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.505872 master-0 kubenswrapper[31559]: I0216 02:38:05.505717 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72gk6\" (UniqueName: \"kubernetes.io/projected/ab81ee43-70d7-4de2-8bbf-051589116ff2-kube-api-access-72gk6\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.505872 master-0 kubenswrapper[31559]: I0216 02:38:05.505735 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.505872 master-0 kubenswrapper[31559]: I0216 02:38:05.505753 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.505872 master-0 
kubenswrapper[31559]: I0216 02:38:05.505772 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4frvw\" (UniqueName: \"kubernetes.io/projected/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-kube-api-access-4frvw\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.505872 master-0 kubenswrapper[31559]: I0216 02:38:05.505792 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.505872 master-0 kubenswrapper[31559]: I0216 02:38:05.505816 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.506322 master-0 kubenswrapper[31559]: I0216 02:38:05.506279 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-config\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.506690 master-0 kubenswrapper[31559]: I0216 02:38:05.506663 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.506902 master-0 kubenswrapper[31559]: I0216 02:38:05.506873 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.507517 master-0 kubenswrapper[31559]: I0216 02:38:05.507487 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-scripts\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.507995 master-0 kubenswrapper[31559]: I0216 02:38:05.507958 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.508313 master-0 kubenswrapper[31559]: I0216 02:38:05.508260 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-config\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.508581 master-0 kubenswrapper[31559]: I0216 02:38:05.508558 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.511168 master-0 kubenswrapper[31559]: I0216 02:38:05.511117 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.511460 master-0 kubenswrapper[31559]: I0216 02:38:05.511409 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.511840 master-0 kubenswrapper[31559]: I0216 02:38:05.511808 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.526740 master-0 kubenswrapper[31559]: I0216 02:38:05.526606 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4frvw\" (UniqueName: \"kubernetes.io/projected/71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b-kube-api-access-4frvw\") pod \"ovn-northd-0\" (UID: \"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b\") " pod="openstack/ovn-northd-0" Feb 16 02:38:05.530658 master-0 kubenswrapper[31559]: I0216 02:38:05.530628 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72gk6\" (UniqueName: \"kubernetes.io/projected/ab81ee43-70d7-4de2-8bbf-051589116ff2-kube-api-access-72gk6\") pod \"dnsmasq-dns-7b9694dd79-ch65s\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") " pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.674402 master-0 kubenswrapper[31559]: I0216 02:38:05.674340 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 02:38:05.679453 master-0 kubenswrapper[31559]: I0216 02:38:05.679390 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.681477 master-0 kubenswrapper[31559]: I0216 02:38:05.681450 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:05.693486 master-0 kubenswrapper[31559]: I0216 02:38:05.693408 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:05.813959 master-0 kubenswrapper[31559]: I0216 02:38:05.813851 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-ovsdbserver-nb\") pod \"82a9069d-9a55-496b-aa69-f107bcbff808\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " Feb 16 02:38:05.814143 master-0 kubenswrapper[31559]: I0216 02:38:05.814010 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5vw6\" (UniqueName: \"kubernetes.io/projected/82a9069d-9a55-496b-aa69-f107bcbff808-kube-api-access-f5vw6\") pod \"82a9069d-9a55-496b-aa69-f107bcbff808\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " Feb 16 02:38:05.814143 master-0 kubenswrapper[31559]: I0216 02:38:05.814083 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-config\") pod \"82a9069d-9a55-496b-aa69-f107bcbff808\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " Feb 16 02:38:05.814686 master-0 kubenswrapper[31559]: I0216 02:38:05.814398 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-dns-svc\") pod \"82a9069d-9a55-496b-aa69-f107bcbff808\" (UID: \"82a9069d-9a55-496b-aa69-f107bcbff808\") " Feb 16 02:38:05.814796 master-0 kubenswrapper[31559]: I0216 02:38:05.814717 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-config" (OuterVolumeSpecName: "config") pod "82a9069d-9a55-496b-aa69-f107bcbff808" (UID: "82a9069d-9a55-496b-aa69-f107bcbff808"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:05.814995 master-0 kubenswrapper[31559]: I0216 02:38:05.814965 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82a9069d-9a55-496b-aa69-f107bcbff808" (UID: "82a9069d-9a55-496b-aa69-f107bcbff808"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:05.815911 master-0 kubenswrapper[31559]: I0216 02:38:05.815155 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82a9069d-9a55-496b-aa69-f107bcbff808" (UID: "82a9069d-9a55-496b-aa69-f107bcbff808"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:05.819614 master-0 kubenswrapper[31559]: I0216 02:38:05.819576 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:05.819710 master-0 kubenswrapper[31559]: I0216 02:38:05.819641 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:05.819710 master-0 kubenswrapper[31559]: I0216 02:38:05.819660 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82a9069d-9a55-496b-aa69-f107bcbff808-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:05.823720 master-0 kubenswrapper[31559]: I0216 02:38:05.823685 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a9069d-9a55-496b-aa69-f107bcbff808-kube-api-access-f5vw6" (OuterVolumeSpecName: "kube-api-access-f5vw6") pod "82a9069d-9a55-496b-aa69-f107bcbff808" (UID: "82a9069d-9a55-496b-aa69-f107bcbff808"). InnerVolumeSpecName "kube-api-access-f5vw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:05.937813 master-0 kubenswrapper[31559]: I0216 02:38:05.937772 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5vw6\" (UniqueName: \"kubernetes.io/projected/82a9069d-9a55-496b-aa69-f107bcbff808-kube-api-access-f5vw6\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:05.957545 master-0 kubenswrapper[31559]: I0216 02:38:05.957501 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-d45k6"] Feb 16 02:38:06.147605 master-0 kubenswrapper[31559]: I0216 02:38:06.147515 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 02:38:06.255927 master-0 kubenswrapper[31559]: I0216 02:38:06.255875 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-ch65s"] Feb 16 02:38:06.258538 master-0 kubenswrapper[31559]: W0216 02:38:06.258486 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab81ee43_70d7_4de2_8bbf_051589116ff2.slice/crio-bd92caf65fc5368f81922b88f84828b6f2f37212fb0e290887a3a7a17fcc3408 WatchSource:0}: Error finding container bd92caf65fc5368f81922b88f84828b6f2f37212fb0e290887a3a7a17fcc3408: Status 404 returned error can't find the container with id bd92caf65fc5368f81922b88f84828b6f2f37212fb0e290887a3a7a17fcc3408 Feb 16 02:38:06.691709 master-0 kubenswrapper[31559]: I0216 02:38:06.691618 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b","Type":"ContainerStarted","Data":"f634b877a358b6af0226e32948ca2932cfae51185b896ab09daf1c303b6dcb29"} Feb 16 02:38:06.693890 master-0 kubenswrapper[31559]: I0216 02:38:06.693820 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d45k6" 
event={"ID":"12ebcefd-cb4e-43a5-b61b-3ed39634af74","Type":"ContainerStarted","Data":"47fcbcc49c3b18d3a14558c4f066fc0890296d21327eb317815fc7ecef1fc8db"} Feb 16 02:38:06.693970 master-0 kubenswrapper[31559]: I0216 02:38:06.693898 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d45k6" event={"ID":"12ebcefd-cb4e-43a5-b61b-3ed39634af74","Type":"ContainerStarted","Data":"88d5886d044a01cc1f295980c160429a933e8bf8bea57832d5d2a947b7e68b14"} Feb 16 02:38:06.697055 master-0 kubenswrapper[31559]: I0216 02:38:06.696988 31559 generic.go:334] "Generic (PLEG): container finished" podID="ab81ee43-70d7-4de2-8bbf-051589116ff2" containerID="42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6" exitCode=0 Feb 16 02:38:06.697125 master-0 kubenswrapper[31559]: I0216 02:38:06.697064 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" event={"ID":"ab81ee43-70d7-4de2-8bbf-051589116ff2","Type":"ContainerDied","Data":"42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6"} Feb 16 02:38:06.697161 master-0 kubenswrapper[31559]: I0216 02:38:06.697144 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" event={"ID":"ab81ee43-70d7-4de2-8bbf-051589116ff2","Type":"ContainerStarted","Data":"bd92caf65fc5368f81922b88f84828b6f2f37212fb0e290887a3a7a17fcc3408"} Feb 16 02:38:06.697193 master-0 kubenswrapper[31559]: I0216 02:38:06.697157 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-gs67b" Feb 16 02:38:06.745835 master-0 kubenswrapper[31559]: I0216 02:38:06.745673 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-d45k6" podStartSLOduration=1.7456428050000001 podStartE2EDuration="1.745642805s" podCreationTimestamp="2026-02-16 02:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:06.73040347 +0000 UTC m=+939.075009495" watchObservedRunningTime="2026-02-16 02:38:06.745642805 +0000 UTC m=+939.090248830" Feb 16 02:38:06.832093 master-0 kubenswrapper[31559]: I0216 02:38:06.832019 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-gs67b"] Feb 16 02:38:06.841167 master-0 kubenswrapper[31559]: I0216 02:38:06.841128 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-gs67b"] Feb 16 02:38:07.709059 master-0 kubenswrapper[31559]: I0216 02:38:07.708828 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" event={"ID":"ab81ee43-70d7-4de2-8bbf-051589116ff2","Type":"ContainerStarted","Data":"6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47"} Feb 16 02:38:07.714542 master-0 kubenswrapper[31559]: I0216 02:38:07.709595 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" Feb 16 02:38:07.739707 master-0 kubenswrapper[31559]: I0216 02:38:07.739631 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" podStartSLOduration=2.73960704 podStartE2EDuration="2.73960704s" podCreationTimestamp="2026-02-16 02:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
02:38:07.733717418 +0000 UTC m=+940.078323453" watchObservedRunningTime="2026-02-16 02:38:07.73960704 +0000 UTC m=+940.084213065" Feb 16 02:38:07.918014 master-0 kubenswrapper[31559]: I0216 02:38:07.917810 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 02:38:07.955493 master-0 kubenswrapper[31559]: I0216 02:38:07.955392 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a9069d-9a55-496b-aa69-f107bcbff808" path="/var/lib/kubelet/pods/82a9069d-9a55-496b-aa69-f107bcbff808/volumes" Feb 16 02:38:08.739409 master-0 kubenswrapper[31559]: I0216 02:38:08.739320 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b","Type":"ContainerStarted","Data":"4a38914c2a195de914a32ce8f6bac15fc9db4c08ab3c4909240670f6b15abda7"} Feb 16 02:38:08.739409 master-0 kubenswrapper[31559]: I0216 02:38:08.739405 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"71f6ca45-5f9f-4d4e-b9fa-65862c1e3e4b","Type":"ContainerStarted","Data":"e48fbbd2a02e043d0b7470fd2c1cd86411adc8c8d22044d9b345f167a3971715"} Feb 16 02:38:08.783389 master-0 kubenswrapper[31559]: I0216 02:38:08.783255 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.436835357 podStartE2EDuration="3.783226365s" podCreationTimestamp="2026-02-16 02:38:05 +0000 UTC" firstStartedPulling="2026-02-16 02:38:06.161295409 +0000 UTC m=+938.505901424" lastFinishedPulling="2026-02-16 02:38:07.507686417 +0000 UTC m=+939.852292432" observedRunningTime="2026-02-16 02:38:08.764465368 +0000 UTC m=+941.109071423" watchObservedRunningTime="2026-02-16 02:38:08.783226365 +0000 UTC m=+941.127832420" Feb 16 02:38:09.735924 master-0 kubenswrapper[31559]: I0216 02:38:09.735845 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-ch65s"] 
Feb 16 02:38:09.785260 master-0 kubenswrapper[31559]: I0216 02:38:09.784921 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 16 02:38:09.785260 master-0 kubenswrapper[31559]: I0216 02:38:09.785078 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" podUID="ab81ee43-70d7-4de2-8bbf-051589116ff2" containerName="dnsmasq-dns" containerID="cri-o://6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47" gracePeriod=10
Feb 16 02:38:09.807633 master-0 kubenswrapper[31559]: I0216 02:38:09.806632 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-plzpw"]
Feb 16 02:38:09.812832 master-0 kubenswrapper[31559]: I0216 02:38:09.810235 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:09.825461 master-0 kubenswrapper[31559]: I0216 02:38:09.825162 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-plzpw"]
Feb 16 02:38:09.931012 master-0 kubenswrapper[31559]: I0216 02:38:09.930940 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7f44\" (UniqueName: \"kubernetes.io/projected/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-kube-api-access-w7f44\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:09.931244 master-0 kubenswrapper[31559]: I0216 02:38:09.931048 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-dns-svc\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:09.931244 master-0 kubenswrapper[31559]: I0216 02:38:09.931090 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:09.931244 master-0 kubenswrapper[31559]: I0216 02:38:09.931132 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-config\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:09.931244 master-0 kubenswrapper[31559]: I0216 02:38:09.931147 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.033963 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7f44\" (UniqueName: \"kubernetes.io/projected/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-kube-api-access-w7f44\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.034067 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-dns-svc\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.034106 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.034527 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-config\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.034582 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.035044 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-dns-svc\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.036158 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-config\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.036732 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.044482 master-0 kubenswrapper[31559]: I0216 02:38:10.036784 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.049578 master-0 kubenswrapper[31559]: I0216 02:38:10.049548 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7f44\" (UniqueName: \"kubernetes.io/projected/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-kube-api-access-w7f44\") pod \"dnsmasq-dns-6fd49994df-plzpw\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.252359 master-0 kubenswrapper[31559]: I0216 02:38:10.252275 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:10.352942 master-0 kubenswrapper[31559]: I0216 02:38:10.352807 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s"
Feb 16 02:38:10.451052 master-0 kubenswrapper[31559]: I0216 02:38:10.450990 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-dns-svc\") pod \"ab81ee43-70d7-4de2-8bbf-051589116ff2\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") "
Feb 16 02:38:10.451253 master-0 kubenswrapper[31559]: I0216 02:38:10.451200 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72gk6\" (UniqueName: \"kubernetes.io/projected/ab81ee43-70d7-4de2-8bbf-051589116ff2-kube-api-access-72gk6\") pod \"ab81ee43-70d7-4de2-8bbf-051589116ff2\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") "
Feb 16 02:38:10.451291 master-0 kubenswrapper[31559]: I0216 02:38:10.451262 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-nb\") pod \"ab81ee43-70d7-4de2-8bbf-051589116ff2\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") "
Feb 16 02:38:10.451865 master-0 kubenswrapper[31559]: I0216 02:38:10.451313 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-config\") pod \"ab81ee43-70d7-4de2-8bbf-051589116ff2\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") "
Feb 16 02:38:10.451865 master-0 kubenswrapper[31559]: I0216 02:38:10.451394 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-sb\") pod \"ab81ee43-70d7-4de2-8bbf-051589116ff2\" (UID: \"ab81ee43-70d7-4de2-8bbf-051589116ff2\") "
Feb 16 02:38:10.456494 master-0 kubenswrapper[31559]: I0216 02:38:10.456052 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab81ee43-70d7-4de2-8bbf-051589116ff2-kube-api-access-72gk6" (OuterVolumeSpecName: "kube-api-access-72gk6") pod "ab81ee43-70d7-4de2-8bbf-051589116ff2" (UID: "ab81ee43-70d7-4de2-8bbf-051589116ff2"). InnerVolumeSpecName "kube-api-access-72gk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:38:10.501820 master-0 kubenswrapper[31559]: I0216 02:38:10.501731 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-config" (OuterVolumeSpecName: "config") pod "ab81ee43-70d7-4de2-8bbf-051589116ff2" (UID: "ab81ee43-70d7-4de2-8bbf-051589116ff2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:38:10.522220 master-0 kubenswrapper[31559]: I0216 02:38:10.522164 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab81ee43-70d7-4de2-8bbf-051589116ff2" (UID: "ab81ee43-70d7-4de2-8bbf-051589116ff2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:38:10.549456 master-0 kubenswrapper[31559]: I0216 02:38:10.538674 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab81ee43-70d7-4de2-8bbf-051589116ff2" (UID: "ab81ee43-70d7-4de2-8bbf-051589116ff2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:38:10.553830 master-0 kubenswrapper[31559]: I0216 02:38:10.553738 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72gk6\" (UniqueName: \"kubernetes.io/projected/ab81ee43-70d7-4de2-8bbf-051589116ff2-kube-api-access-72gk6\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:10.553830 master-0 kubenswrapper[31559]: I0216 02:38:10.553783 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:10.553830 master-0 kubenswrapper[31559]: I0216 02:38:10.553796 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:10.553830 master-0 kubenswrapper[31559]: I0216 02:38:10.553805 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:10.578254 master-0 kubenswrapper[31559]: I0216 02:38:10.578176 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab81ee43-70d7-4de2-8bbf-051589116ff2" (UID: "ab81ee43-70d7-4de2-8bbf-051589116ff2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:38:10.655705 master-0 kubenswrapper[31559]: I0216 02:38:10.655663 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab81ee43-70d7-4de2-8bbf-051589116ff2-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:10.762762 master-0 kubenswrapper[31559]: I0216 02:38:10.762617 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-plzpw"]
Feb 16 02:38:10.795572 master-0 kubenswrapper[31559]: I0216 02:38:10.795528 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" event={"ID":"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f","Type":"ContainerStarted","Data":"10414891f25ec0f52b210584d48cccd9b412e50d5558a31d72eb7d87fafc8238"}
Feb 16 02:38:10.797555 master-0 kubenswrapper[31559]: I0216 02:38:10.797528 31559 generic.go:334] "Generic (PLEG): container finished" podID="ab81ee43-70d7-4de2-8bbf-051589116ff2" containerID="6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47" exitCode=0
Feb 16 02:38:10.797756 master-0 kubenswrapper[31559]: I0216 02:38:10.797559 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" event={"ID":"ab81ee43-70d7-4de2-8bbf-051589116ff2","Type":"ContainerDied","Data":"6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47"}
Feb 16 02:38:10.797830 master-0 kubenswrapper[31559]: I0216 02:38:10.797792 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s" event={"ID":"ab81ee43-70d7-4de2-8bbf-051589116ff2","Type":"ContainerDied","Data":"bd92caf65fc5368f81922b88f84828b6f2f37212fb0e290887a3a7a17fcc3408"}
Feb 16 02:38:10.797874 master-0 kubenswrapper[31559]: I0216 02:38:10.797591 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-ch65s"
Feb 16 02:38:10.797874 master-0 kubenswrapper[31559]: I0216 02:38:10.797840 31559 scope.go:117] "RemoveContainer" containerID="6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47"
Feb 16 02:38:10.826013 master-0 kubenswrapper[31559]: I0216 02:38:10.825974 31559 scope.go:117] "RemoveContainer" containerID="42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6"
Feb 16 02:38:10.873947 master-0 kubenswrapper[31559]: I0216 02:38:10.873902 31559 scope.go:117] "RemoveContainer" containerID="6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47"
Feb 16 02:38:10.874393 master-0 kubenswrapper[31559]: E0216 02:38:10.874341 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47\": container with ID starting with 6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47 not found: ID does not exist" containerID="6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47"
Feb 16 02:38:10.875119 master-0 kubenswrapper[31559]: I0216 02:38:10.874397 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47"} err="failed to get container status \"6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47\": rpc error: code = NotFound desc = could not find container \"6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47\": container with ID starting with 6635673f723becbde33f69c13f26598e0e19eb2a90ae4b5e128db3bd2ccf2d47 not found: ID does not exist"
Feb 16 02:38:10.875119 master-0 kubenswrapper[31559]: I0216 02:38:10.875119 31559 scope.go:117] "RemoveContainer" containerID="42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6"
Feb 16 02:38:10.876802 master-0 kubenswrapper[31559]: E0216 02:38:10.875547 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6\": container with ID starting with 42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6 not found: ID does not exist" containerID="42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6"
Feb 16 02:38:10.876802 master-0 kubenswrapper[31559]: I0216 02:38:10.875608 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6"} err="failed to get container status \"42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6\": rpc error: code = NotFound desc = could not find container \"42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6\": container with ID starting with 42266076f9abee5427f7a9544d129f546964c9463de355abb96de6794469ada6 not found: ID does not exist"
Feb 16 02:38:10.888400 master-0 kubenswrapper[31559]: I0216 02:38:10.888347 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-ch65s"]
Feb 16 02:38:10.983138 master-0 kubenswrapper[31559]: I0216 02:38:10.983074 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-ch65s"]
Feb 16 02:38:11.826670 master-0 kubenswrapper[31559]: I0216 02:38:11.826591 31559 generic.go:334] "Generic (PLEG): container finished" podID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerID="23d055c25f63892b80e04926a3ff71c4d4b17071d223b367dada825790016325" exitCode=0
Feb 16 02:38:11.827528 master-0 kubenswrapper[31559]: I0216 02:38:11.826717 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" event={"ID":"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f","Type":"ContainerDied","Data":"23d055c25f63892b80e04926a3ff71c4d4b17071d223b367dada825790016325"}
Feb 16 02:38:11.830473 master-0 kubenswrapper[31559]: I0216 02:38:11.830394 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 16 02:38:11.831307 master-0 kubenswrapper[31559]: E0216 02:38:11.831268 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab81ee43-70d7-4de2-8bbf-051589116ff2" containerName="init"
Feb 16 02:38:11.831513 master-0 kubenswrapper[31559]: I0216 02:38:11.831489 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab81ee43-70d7-4de2-8bbf-051589116ff2" containerName="init"
Feb 16 02:38:11.831703 master-0 kubenswrapper[31559]: E0216 02:38:11.831679 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab81ee43-70d7-4de2-8bbf-051589116ff2" containerName="dnsmasq-dns"
Feb 16 02:38:11.831888 master-0 kubenswrapper[31559]: I0216 02:38:11.831857 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab81ee43-70d7-4de2-8bbf-051589116ff2" containerName="dnsmasq-dns"
Feb 16 02:38:11.832478 master-0 kubenswrapper[31559]: I0216 02:38:11.832412 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab81ee43-70d7-4de2-8bbf-051589116ff2" containerName="dnsmasq-dns"
Feb 16 02:38:11.846719 master-0 kubenswrapper[31559]: I0216 02:38:11.846652 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 02:38:11.849979 master-0 kubenswrapper[31559]: I0216 02:38:11.849933 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 16 02:38:11.850225 master-0 kubenswrapper[31559]: I0216 02:38:11.850190 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 16 02:38:11.850506 master-0 kubenswrapper[31559]: I0216 02:38:11.850473 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 16 02:38:11.896460 master-0 kubenswrapper[31559]: I0216 02:38:11.896363 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 02:38:11.940122 master-0 kubenswrapper[31559]: I0216 02:38:11.940054 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab81ee43-70d7-4de2-8bbf-051589116ff2" path="/var/lib/kubelet/pods/ab81ee43-70d7-4de2-8bbf-051589116ff2/volumes"
Feb 16 02:38:12.107143 master-0 kubenswrapper[31559]: I0216 02:38:12.107031 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53aef07-7967-4da8-a56a-018b25e9b92e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.107143 master-0 kubenswrapper[31559]: I0216 02:38:12.107147 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c53aef07-7967-4da8-a56a-018b25e9b92e-cache\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.107510 master-0 kubenswrapper[31559]: I0216 02:38:12.107350 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.107510 master-0 kubenswrapper[31559]: I0216 02:38:12.107496 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39711277-b959-4101-a8d0-0b6434ce4062\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0fab6d1-a6c2-4520-adad-1a4fa83db9a1\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.107677 master-0 kubenswrapper[31559]: I0216 02:38:12.107542 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8tt\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-kube-api-access-2j8tt\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.107677 master-0 kubenswrapper[31559]: I0216 02:38:12.107656 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c53aef07-7967-4da8-a56a-018b25e9b92e-lock\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.182230 master-0 kubenswrapper[31559]: I0216 02:38:12.182147 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 16 02:38:12.182230 master-0 kubenswrapper[31559]: I0216 02:38:12.182228 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 16 02:38:12.210136 master-0 kubenswrapper[31559]: I0216 02:38:12.210041 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c53aef07-7967-4da8-a56a-018b25e9b92e-lock\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.210364 master-0 kubenswrapper[31559]: I0216 02:38:12.210330 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53aef07-7967-4da8-a56a-018b25e9b92e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.210418 master-0 kubenswrapper[31559]: I0216 02:38:12.210395 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c53aef07-7967-4da8-a56a-018b25e9b92e-cache\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.210610 master-0 kubenswrapper[31559]: I0216 02:38:12.210530 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.210784 master-0 kubenswrapper[31559]: I0216 02:38:12.210666 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39711277-b959-4101-a8d0-0b6434ce4062\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0fab6d1-a6c2-4520-adad-1a4fa83db9a1\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.210784 master-0 kubenswrapper[31559]: I0216 02:38:12.210693 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c53aef07-7967-4da8-a56a-018b25e9b92e-lock\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.210784 master-0 kubenswrapper[31559]: I0216 02:38:12.210747 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j8tt\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-kube-api-access-2j8tt\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.211317 master-0 kubenswrapper[31559]: E0216 02:38:12.210820 31559 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 02:38:12.211317 master-0 kubenswrapper[31559]: E0216 02:38:12.210839 31559 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 02:38:12.211317 master-0 kubenswrapper[31559]: E0216 02:38:12.210886 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift podName:c53aef07-7967-4da8-a56a-018b25e9b92e nodeName:}" failed. No retries permitted until 2026-02-16 02:38:12.710867046 +0000 UTC m=+945.055473071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift") pod "swift-storage-0" (UID: "c53aef07-7967-4da8-a56a-018b25e9b92e") : configmap "swift-ring-files" not found
Feb 16 02:38:12.212146 master-0 kubenswrapper[31559]: I0216 02:38:12.212100 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c53aef07-7967-4da8-a56a-018b25e9b92e-cache\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.213181 master-0 kubenswrapper[31559]: I0216 02:38:12.213136 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 02:38:12.213268 master-0 kubenswrapper[31559]: I0216 02:38:12.213185 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39711277-b959-4101-a8d0-0b6434ce4062\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0fab6d1-a6c2-4520-adad-1a4fa83db9a1\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1d72addbed169f974b7cf0a388878c6a6a70ae38bb0fa223741f0d3df8f8ddfc/globalmount\"" pod="openstack/swift-storage-0"
Feb 16 02:38:12.232970 master-0 kubenswrapper[31559]: I0216 02:38:12.219010 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53aef07-7967-4da8-a56a-018b25e9b92e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.261383 master-0 kubenswrapper[31559]: I0216 02:38:12.261323 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j8tt\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-kube-api-access-2j8tt\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.286715 master-0 kubenswrapper[31559]: I0216 02:38:12.286646 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 16 02:38:12.720630 master-0 kubenswrapper[31559]: I0216 02:38:12.720513 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:12.720920 master-0 kubenswrapper[31559]: E0216 02:38:12.720830 31559 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 02:38:12.720920 master-0 kubenswrapper[31559]: E0216 02:38:12.720883 31559 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 02:38:12.721057 master-0 kubenswrapper[31559]: E0216 02:38:12.720974 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift podName:c53aef07-7967-4da8-a56a-018b25e9b92e nodeName:}" failed. No retries permitted until 2026-02-16 02:38:13.720941543 +0000 UTC m=+946.065547598 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift") pod "swift-storage-0" (UID: "c53aef07-7967-4da8-a56a-018b25e9b92e") : configmap "swift-ring-files" not found
Feb 16 02:38:12.860482 master-0 kubenswrapper[31559]: I0216 02:38:12.859828 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" event={"ID":"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f","Type":"ContainerStarted","Data":"f654109ada4b3445a4fbbdd048831e4736211bc00c4c70d2aecbe8baefe2e609"}
Feb 16 02:38:12.861526 master-0 kubenswrapper[31559]: I0216 02:38:12.861349 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fd49994df-plzpw"
Feb 16 02:38:12.987924 master-0 kubenswrapper[31559]: I0216 02:38:12.985803 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" podStartSLOduration=3.985778581 podStartE2EDuration="3.985778581s" podCreationTimestamp="2026-02-16 02:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:12.985282028 +0000 UTC m=+945.329888103" watchObservedRunningTime="2026-02-16 02:38:12.985778581 +0000 UTC m=+945.330384636"
Feb 16 02:38:13.018040 master-0 kubenswrapper[31559]: I0216 02:38:13.017948 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 16 02:38:13.357474 master-0 kubenswrapper[31559]: I0216 02:38:13.357321 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 16 02:38:13.357474 master-0 kubenswrapper[31559]: I0216 02:38:13.357403 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 16 02:38:13.544676 master-0 kubenswrapper[31559]: I0216 02:38:13.544317 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 16 02:38:13.641794 master-0 kubenswrapper[31559]: I0216 02:38:13.641619 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39711277-b959-4101-a8d0-0b6434ce4062\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0fab6d1-a6c2-4520-adad-1a4fa83db9a1\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:13.764979 master-0 kubenswrapper[31559]: I0216 02:38:13.764928 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0"
Feb 16 02:38:13.765300 master-0 kubenswrapper[31559]: E0216 02:38:13.765182 31559 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 02:38:13.765392 master-0 kubenswrapper[31559]: E0216 02:38:13.765377 31559 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 02:38:13.765570 master-0 kubenswrapper[31559]: E0216 02:38:13.765552 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift podName:c53aef07-7967-4da8-a56a-018b25e9b92e nodeName:}" failed. No retries permitted until 2026-02-16 02:38:15.765531183 +0000 UTC m=+948.110137208 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift") pod "swift-storage-0" (UID: "c53aef07-7967-4da8-a56a-018b25e9b92e") : configmap "swift-ring-files" not found
Feb 16 02:38:13.975057 master-0 kubenswrapper[31559]: I0216 02:38:13.974950 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 16 02:38:15.225647 master-0 kubenswrapper[31559]: I0216 02:38:15.225569 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dtrws"]
Feb 16 02:38:15.232270 master-0 kubenswrapper[31559]: I0216 02:38:15.232222 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.237955 master-0 kubenswrapper[31559]: I0216 02:38:15.237886 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 16 02:38:15.238368 master-0 kubenswrapper[31559]: I0216 02:38:15.238317 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 16 02:38:15.239131 master-0 kubenswrapper[31559]: I0216 02:38:15.239085 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 02:38:15.259122 master-0 kubenswrapper[31559]: I0216 02:38:15.259019 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9l9b8"]
Feb 16 02:38:15.261170 master-0 kubenswrapper[31559]: I0216 02:38:15.261106 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9l9b8"
Feb 16 02:38:15.263489 master-0 kubenswrapper[31559]: I0216 02:38:15.263414 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 16 02:38:15.430560 master-0 kubenswrapper[31559]: I0216 02:38:15.430392 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a78217a-f6d3-4ff6-9a50-24367edc7a67-operator-scripts\") pod \"root-account-create-update-9l9b8\" (UID: \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\") " pod="openstack/root-account-create-update-9l9b8"
Feb 16 02:38:15.430846 master-0 kubenswrapper[31559]: I0216 02:38:15.430583 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-swiftconf\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.431310 master-0 kubenswrapper[31559]: I0216 02:38:15.431187 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-combined-ca-bundle\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.431476 master-0 kubenswrapper[31559]: I0216 02:38:15.431358 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-scripts\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.431476 master-0 kubenswrapper[31559]: I0216 02:38:15.431459 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb609b01-8137-43bc-a5b5-a9c3744a9067-etc-swift\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.431719 master-0 kubenswrapper[31559]: I0216 02:38:15.431685 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcwz\" (UniqueName: \"kubernetes.io/projected/4a78217a-f6d3-4ff6-9a50-24367edc7a67-kube-api-access-cbcwz\") pod \"root-account-create-update-9l9b8\" (UID: \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\") " pod="openstack/root-account-create-update-9l9b8"
Feb 16 02:38:15.431831 master-0 kubenswrapper[31559]: I0216 02:38:15.431727 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-ring-data-devices\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.431910 master-0 kubenswrapper[31559]: I0216 02:38:15.431870 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-dispersionconf\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.431985 master-0 kubenswrapper[31559]: I0216 02:38:15.431965 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf8nl\" (UniqueName: \"kubernetes.io/projected/cb609b01-8137-43bc-a5b5-a9c3744a9067-kube-api-access-nf8nl\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.440590 master-0 kubenswrapper[31559]: I0216 02:38:15.440512 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dtrws"]
Feb 16 02:38:15.462589 master-0 kubenswrapper[31559]: I0216 02:38:15.462143 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9l9b8"]
Feb 16 02:38:15.534659 master-0 kubenswrapper[31559]: I0216 02:38:15.534273 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb609b01-8137-43bc-a5b5-a9c3744a9067-etc-swift\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.534659 master-0 kubenswrapper[31559]: I0216 02:38:15.534406 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcwz\" (UniqueName: \"kubernetes.io/projected/4a78217a-f6d3-4ff6-9a50-24367edc7a67-kube-api-access-cbcwz\") pod \"root-account-create-update-9l9b8\" (UID: \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\") " pod="openstack/root-account-create-update-9l9b8"
Feb 16 02:38:15.534659 master-0 kubenswrapper[31559]: I0216 02:38:15.534461 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-ring-data-devices\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws"
Feb 16 02:38:15.535024 master-0 kubenswrapper[31559]: I0216 02:38:15.534806 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-dispersionconf\") pod \"swift-ring-rebalance-dtrws\" (UID: 
\"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.535024 master-0 kubenswrapper[31559]: I0216 02:38:15.534986 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb609b01-8137-43bc-a5b5-a9c3744a9067-etc-swift\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.535110 master-0 kubenswrapper[31559]: I0216 02:38:15.535030 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf8nl\" (UniqueName: \"kubernetes.io/projected/cb609b01-8137-43bc-a5b5-a9c3744a9067-kube-api-access-nf8nl\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.535155 master-0 kubenswrapper[31559]: I0216 02:38:15.535110 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a78217a-f6d3-4ff6-9a50-24367edc7a67-operator-scripts\") pod \"root-account-create-update-9l9b8\" (UID: \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\") " pod="openstack/root-account-create-update-9l9b8" Feb 16 02:38:15.535339 master-0 kubenswrapper[31559]: I0216 02:38:15.535294 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-swiftconf\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.535618 master-0 kubenswrapper[31559]: I0216 02:38:15.535582 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-combined-ca-bundle\") pod \"swift-ring-rebalance-dtrws\" (UID: 
\"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.535726 master-0 kubenswrapper[31559]: I0216 02:38:15.535698 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-scripts\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.536132 master-0 kubenswrapper[31559]: I0216 02:38:15.536086 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a78217a-f6d3-4ff6-9a50-24367edc7a67-operator-scripts\") pod \"root-account-create-update-9l9b8\" (UID: \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\") " pod="openstack/root-account-create-update-9l9b8" Feb 16 02:38:15.536180 master-0 kubenswrapper[31559]: I0216 02:38:15.536159 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-ring-data-devices\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.536515 master-0 kubenswrapper[31559]: I0216 02:38:15.536482 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-scripts\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.540052 master-0 kubenswrapper[31559]: I0216 02:38:15.540000 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-combined-ca-bundle\") pod \"swift-ring-rebalance-dtrws\" (UID: 
\"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.540613 master-0 kubenswrapper[31559]: I0216 02:38:15.540563 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-swiftconf\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.540672 master-0 kubenswrapper[31559]: I0216 02:38:15.540620 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-dispersionconf\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.596521 master-0 kubenswrapper[31559]: I0216 02:38:15.596466 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbcwz\" (UniqueName: \"kubernetes.io/projected/4a78217a-f6d3-4ff6-9a50-24367edc7a67-kube-api-access-cbcwz\") pod \"root-account-create-update-9l9b8\" (UID: \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\") " pod="openstack/root-account-create-update-9l9b8" Feb 16 02:38:15.602540 master-0 kubenswrapper[31559]: I0216 02:38:15.602480 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf8nl\" (UniqueName: \"kubernetes.io/projected/cb609b01-8137-43bc-a5b5-a9c3744a9067-kube-api-access-nf8nl\") pod \"swift-ring-rebalance-dtrws\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.840808 master-0 kubenswrapper[31559]: I0216 02:38:15.840632 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift\") pod \"swift-storage-0\" (UID: 
\"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0" Feb 16 02:38:15.841089 master-0 kubenswrapper[31559]: E0216 02:38:15.841000 31559 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 02:38:15.841089 master-0 kubenswrapper[31559]: E0216 02:38:15.841059 31559 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 02:38:15.841236 master-0 kubenswrapper[31559]: E0216 02:38:15.841162 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift podName:c53aef07-7967-4da8-a56a-018b25e9b92e nodeName:}" failed. No retries permitted until 2026-02-16 02:38:19.841133144 +0000 UTC m=+952.185739189 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift") pod "swift-storage-0" (UID: "c53aef07-7967-4da8-a56a-018b25e9b92e") : configmap "swift-ring-files" not found Feb 16 02:38:15.869480 master-0 kubenswrapper[31559]: I0216 02:38:15.869407 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:15.891873 master-0 kubenswrapper[31559]: I0216 02:38:15.891585 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9l9b8" Feb 16 02:38:16.558577 master-0 kubenswrapper[31559]: I0216 02:38:16.558504 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dtrws"] Feb 16 02:38:16.563308 master-0 kubenswrapper[31559]: W0216 02:38:16.563256 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb609b01_8137_43bc_a5b5_a9c3744a9067.slice/crio-7eb3c02b621dd1c110b7cc269322aa57266f737d54b7f22d2cb39346faa57e92 WatchSource:0}: Error finding container 7eb3c02b621dd1c110b7cc269322aa57266f737d54b7f22d2cb39346faa57e92: Status 404 returned error can't find the container with id 7eb3c02b621dd1c110b7cc269322aa57266f737d54b7f22d2cb39346faa57e92 Feb 16 02:38:16.629288 master-0 kubenswrapper[31559]: I0216 02:38:16.629186 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9l9b8"] Feb 16 02:38:16.632522 master-0 kubenswrapper[31559]: W0216 02:38:16.632472 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a78217a_f6d3_4ff6_9a50_24367edc7a67.slice/crio-bd605565ddf97991689aef2eacba808933e10b1d5e3facfb9407988838e5a586 WatchSource:0}: Error finding container bd605565ddf97991689aef2eacba808933e10b1d5e3facfb9407988838e5a586: Status 404 returned error can't find the container with id bd605565ddf97991689aef2eacba808933e10b1d5e3facfb9407988838e5a586 Feb 16 02:38:16.918200 master-0 kubenswrapper[31559]: I0216 02:38:16.918118 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9l9b8" event={"ID":"4a78217a-f6d3-4ff6-9a50-24367edc7a67","Type":"ContainerStarted","Data":"d813a884d353fce7a8a09dbc9395e416cc6e807a219a672fddb07b7afb2e5e70"} Feb 16 02:38:16.918200 master-0 kubenswrapper[31559]: I0216 02:38:16.918200 31559 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/root-account-create-update-9l9b8" event={"ID":"4a78217a-f6d3-4ff6-9a50-24367edc7a67","Type":"ContainerStarted","Data":"bd605565ddf97991689aef2eacba808933e10b1d5e3facfb9407988838e5a586"} Feb 16 02:38:16.922296 master-0 kubenswrapper[31559]: I0216 02:38:16.922230 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dtrws" event={"ID":"cb609b01-8137-43bc-a5b5-a9c3744a9067","Type":"ContainerStarted","Data":"7eb3c02b621dd1c110b7cc269322aa57266f737d54b7f22d2cb39346faa57e92"} Feb 16 02:38:16.955403 master-0 kubenswrapper[31559]: I0216 02:38:16.955306 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9l9b8" podStartSLOduration=1.95528469 podStartE2EDuration="1.95528469s" podCreationTimestamp="2026-02-16 02:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:16.940323867 +0000 UTC m=+949.284929942" watchObservedRunningTime="2026-02-16 02:38:16.95528469 +0000 UTC m=+949.299890715" Feb 16 02:38:17.939558 master-0 kubenswrapper[31559]: I0216 02:38:17.939479 31559 generic.go:334] "Generic (PLEG): container finished" podID="4a78217a-f6d3-4ff6-9a50-24367edc7a67" containerID="d813a884d353fce7a8a09dbc9395e416cc6e807a219a672fddb07b7afb2e5e70" exitCode=0 Feb 16 02:38:17.948887 master-0 kubenswrapper[31559]: I0216 02:38:17.948832 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9l9b8" event={"ID":"4a78217a-f6d3-4ff6-9a50-24367edc7a67","Type":"ContainerDied","Data":"d813a884d353fce7a8a09dbc9395e416cc6e807a219a672fddb07b7afb2e5e70"} Feb 16 02:38:18.105796 master-0 kubenswrapper[31559]: I0216 02:38:18.105547 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fx55d"] Feb 16 02:38:18.126934 master-0 kubenswrapper[31559]: I0216 02:38:18.106971 31559 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-db-create-fx55d" Feb 16 02:38:18.126934 master-0 kubenswrapper[31559]: I0216 02:38:18.117364 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fx55d"] Feb 16 02:38:18.135018 master-0 kubenswrapper[31559]: I0216 02:38:18.134954 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2f0932-270d-4b92-a1a3-03181503039b-operator-scripts\") pod \"glance-db-create-fx55d\" (UID: \"ec2f0932-270d-4b92-a1a3-03181503039b\") " pod="openstack/glance-db-create-fx55d" Feb 16 02:38:18.135198 master-0 kubenswrapper[31559]: I0216 02:38:18.135124 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt2zl\" (UniqueName: \"kubernetes.io/projected/ec2f0932-270d-4b92-a1a3-03181503039b-kube-api-access-mt2zl\") pod \"glance-db-create-fx55d\" (UID: \"ec2f0932-270d-4b92-a1a3-03181503039b\") " pod="openstack/glance-db-create-fx55d" Feb 16 02:38:18.219225 master-0 kubenswrapper[31559]: I0216 02:38:18.218880 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a2d6-account-create-update-f9fsm"] Feb 16 02:38:18.221005 master-0 kubenswrapper[31559]: I0216 02:38:18.220956 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:18.223499 master-0 kubenswrapper[31559]: I0216 02:38:18.223464 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 02:38:18.238036 master-0 kubenswrapper[31559]: I0216 02:38:18.237965 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2f0932-270d-4b92-a1a3-03181503039b-operator-scripts\") pod \"glance-db-create-fx55d\" (UID: \"ec2f0932-270d-4b92-a1a3-03181503039b\") " pod="openstack/glance-db-create-fx55d" Feb 16 02:38:18.238036 master-0 kubenswrapper[31559]: I0216 02:38:18.238138 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt2zl\" (UniqueName: \"kubernetes.io/projected/ec2f0932-270d-4b92-a1a3-03181503039b-kube-api-access-mt2zl\") pod \"glance-db-create-fx55d\" (UID: \"ec2f0932-270d-4b92-a1a3-03181503039b\") " pod="openstack/glance-db-create-fx55d" Feb 16 02:38:18.238036 master-0 kubenswrapper[31559]: I0216 02:38:18.238203 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-operator-scripts\") pod \"glance-a2d6-account-create-update-f9fsm\" (UID: \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\") " pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:18.238036 master-0 kubenswrapper[31559]: I0216 02:38:18.238275 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgv5l\" (UniqueName: \"kubernetes.io/projected/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-kube-api-access-xgv5l\") pod \"glance-a2d6-account-create-update-f9fsm\" (UID: \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\") " pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:18.239278 master-0 
kubenswrapper[31559]: I0216 02:38:18.239224 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2f0932-270d-4b92-a1a3-03181503039b-operator-scripts\") pod \"glance-db-create-fx55d\" (UID: \"ec2f0932-270d-4b92-a1a3-03181503039b\") " pod="openstack/glance-db-create-fx55d" Feb 16 02:38:18.257523 master-0 kubenswrapper[31559]: I0216 02:38:18.257362 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a2d6-account-create-update-f9fsm"] Feb 16 02:38:18.261217 master-0 kubenswrapper[31559]: I0216 02:38:18.261187 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt2zl\" (UniqueName: \"kubernetes.io/projected/ec2f0932-270d-4b92-a1a3-03181503039b-kube-api-access-mt2zl\") pod \"glance-db-create-fx55d\" (UID: \"ec2f0932-270d-4b92-a1a3-03181503039b\") " pod="openstack/glance-db-create-fx55d" Feb 16 02:38:18.340043 master-0 kubenswrapper[31559]: I0216 02:38:18.339989 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-operator-scripts\") pod \"glance-a2d6-account-create-update-f9fsm\" (UID: \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\") " pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:18.340396 master-0 kubenswrapper[31559]: I0216 02:38:18.340374 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgv5l\" (UniqueName: \"kubernetes.io/projected/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-kube-api-access-xgv5l\") pod \"glance-a2d6-account-create-update-f9fsm\" (UID: \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\") " pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:18.341830 master-0 kubenswrapper[31559]: I0216 02:38:18.341772 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-operator-scripts\") pod \"glance-a2d6-account-create-update-f9fsm\" (UID: \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\") " pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:18.358049 master-0 kubenswrapper[31559]: I0216 02:38:18.357995 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgv5l\" (UniqueName: \"kubernetes.io/projected/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-kube-api-access-xgv5l\") pod \"glance-a2d6-account-create-update-f9fsm\" (UID: \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\") " pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:18.441736 master-0 kubenswrapper[31559]: I0216 02:38:18.441660 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fx55d" Feb 16 02:38:18.551726 master-0 kubenswrapper[31559]: I0216 02:38:18.551569 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:18.689768 master-0 kubenswrapper[31559]: I0216 02:38:18.689366 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-24kfs"] Feb 16 02:38:18.693361 master-0 kubenswrapper[31559]: I0216 02:38:18.692504 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:18.702750 master-0 kubenswrapper[31559]: I0216 02:38:18.702669 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-24kfs"] Feb 16 02:38:18.747967 master-0 kubenswrapper[31559]: I0216 02:38:18.747671 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e71ec1c5-6255-4555-b1b8-685613eb634b-operator-scripts\") pod \"keystone-db-create-24kfs\" (UID: \"e71ec1c5-6255-4555-b1b8-685613eb634b\") " pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:18.748240 master-0 kubenswrapper[31559]: I0216 02:38:18.748028 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26gq4\" (UniqueName: \"kubernetes.io/projected/e71ec1c5-6255-4555-b1b8-685613eb634b-kube-api-access-26gq4\") pod \"keystone-db-create-24kfs\" (UID: \"e71ec1c5-6255-4555-b1b8-685613eb634b\") " pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:18.801339 master-0 kubenswrapper[31559]: I0216 02:38:18.800938 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a6ad-account-create-update-sg6j7"] Feb 16 02:38:18.803018 master-0 kubenswrapper[31559]: I0216 02:38:18.802516 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:18.811754 master-0 kubenswrapper[31559]: I0216 02:38:18.811705 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 02:38:18.827110 master-0 kubenswrapper[31559]: I0216 02:38:18.827055 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a6ad-account-create-update-sg6j7"] Feb 16 02:38:18.850219 master-0 kubenswrapper[31559]: I0216 02:38:18.849818 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26gq4\" (UniqueName: \"kubernetes.io/projected/e71ec1c5-6255-4555-b1b8-685613eb634b-kube-api-access-26gq4\") pod \"keystone-db-create-24kfs\" (UID: \"e71ec1c5-6255-4555-b1b8-685613eb634b\") " pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:18.850471 master-0 kubenswrapper[31559]: I0216 02:38:18.850260 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e71ec1c5-6255-4555-b1b8-685613eb634b-operator-scripts\") pod \"keystone-db-create-24kfs\" (UID: \"e71ec1c5-6255-4555-b1b8-685613eb634b\") " pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:18.850471 master-0 kubenswrapper[31559]: I0216 02:38:18.850365 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhddr\" (UniqueName: \"kubernetes.io/projected/dee98140-f80a-4c3f-81aa-5a5183eeb144-kube-api-access-hhddr\") pod \"keystone-a6ad-account-create-update-sg6j7\" (UID: \"dee98140-f80a-4c3f-81aa-5a5183eeb144\") " pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:18.850471 master-0 kubenswrapper[31559]: I0216 02:38:18.850419 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dee98140-f80a-4c3f-81aa-5a5183eeb144-operator-scripts\") pod \"keystone-a6ad-account-create-update-sg6j7\" (UID: \"dee98140-f80a-4c3f-81aa-5a5183eeb144\") " pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:18.852828 master-0 kubenswrapper[31559]: I0216 02:38:18.852777 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e71ec1c5-6255-4555-b1b8-685613eb634b-operator-scripts\") pod \"keystone-db-create-24kfs\" (UID: \"e71ec1c5-6255-4555-b1b8-685613eb634b\") " pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:18.868869 master-0 kubenswrapper[31559]: I0216 02:38:18.868818 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26gq4\" (UniqueName: \"kubernetes.io/projected/e71ec1c5-6255-4555-b1b8-685613eb634b-kube-api-access-26gq4\") pod \"keystone-db-create-24kfs\" (UID: \"e71ec1c5-6255-4555-b1b8-685613eb634b\") " pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:18.953281 master-0 kubenswrapper[31559]: I0216 02:38:18.953229 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhddr\" (UniqueName: \"kubernetes.io/projected/dee98140-f80a-4c3f-81aa-5a5183eeb144-kube-api-access-hhddr\") pod \"keystone-a6ad-account-create-update-sg6j7\" (UID: \"dee98140-f80a-4c3f-81aa-5a5183eeb144\") " pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:18.954100 master-0 kubenswrapper[31559]: I0216 02:38:18.953645 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee98140-f80a-4c3f-81aa-5a5183eeb144-operator-scripts\") pod \"keystone-a6ad-account-create-update-sg6j7\" (UID: \"dee98140-f80a-4c3f-81aa-5a5183eeb144\") " pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:18.954690 master-0 kubenswrapper[31559]: I0216 02:38:18.954646 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee98140-f80a-4c3f-81aa-5a5183eeb144-operator-scripts\") pod \"keystone-a6ad-account-create-update-sg6j7\" (UID: \"dee98140-f80a-4c3f-81aa-5a5183eeb144\") " pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:19.029486 master-0 kubenswrapper[31559]: I0216 02:38:19.029409 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:19.401773 master-0 kubenswrapper[31559]: I0216 02:38:19.401714 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhddr\" (UniqueName: \"kubernetes.io/projected/dee98140-f80a-4c3f-81aa-5a5183eeb144-kube-api-access-hhddr\") pod \"keystone-a6ad-account-create-update-sg6j7\" (UID: \"dee98140-f80a-4c3f-81aa-5a5183eeb144\") " pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:19.411894 master-0 kubenswrapper[31559]: I0216 02:38:19.411748 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nxnt5"] Feb 16 02:38:19.413833 master-0 kubenswrapper[31559]: I0216 02:38:19.413807 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:19.428309 master-0 kubenswrapper[31559]: I0216 02:38:19.428237 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85ae-account-create-update-4pvhp"] Feb 16 02:38:19.434919 master-0 kubenswrapper[31559]: I0216 02:38:19.434876 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:19.438068 master-0 kubenswrapper[31559]: I0216 02:38:19.437924 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 02:38:19.441239 master-0 kubenswrapper[31559]: I0216 02:38:19.441200 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nxnt5"] Feb 16 02:38:19.441749 master-0 kubenswrapper[31559]: I0216 02:38:19.441720 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:19.447560 master-0 kubenswrapper[31559]: I0216 02:38:19.446931 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ae-account-create-update-4pvhp"] Feb 16 02:38:19.469009 master-0 kubenswrapper[31559]: I0216 02:38:19.468918 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-operator-scripts\") pod \"placement-85ae-account-create-update-4pvhp\" (UID: \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\") " pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:19.469009 master-0 kubenswrapper[31559]: I0216 02:38:19.469002 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f478b2fd-7ccb-4480-8887-9decb4a4b32e-operator-scripts\") pod \"placement-db-create-nxnt5\" (UID: \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\") " pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:19.469237 master-0 kubenswrapper[31559]: I0216 02:38:19.469094 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchht\" (UniqueName: 
\"kubernetes.io/projected/f478b2fd-7ccb-4480-8887-9decb4a4b32e-kube-api-access-zchht\") pod \"placement-db-create-nxnt5\" (UID: \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\") " pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:19.469237 master-0 kubenswrapper[31559]: I0216 02:38:19.469137 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bltrp\" (UniqueName: \"kubernetes.io/projected/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-kube-api-access-bltrp\") pod \"placement-85ae-account-create-update-4pvhp\" (UID: \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\") " pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:19.570194 master-0 kubenswrapper[31559]: I0216 02:38:19.570134 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f478b2fd-7ccb-4480-8887-9decb4a4b32e-operator-scripts\") pod \"placement-db-create-nxnt5\" (UID: \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\") " pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:19.570486 master-0 kubenswrapper[31559]: I0216 02:38:19.570251 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchht\" (UniqueName: \"kubernetes.io/projected/f478b2fd-7ccb-4480-8887-9decb4a4b32e-kube-api-access-zchht\") pod \"placement-db-create-nxnt5\" (UID: \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\") " pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:19.570486 master-0 kubenswrapper[31559]: I0216 02:38:19.570444 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bltrp\" (UniqueName: \"kubernetes.io/projected/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-kube-api-access-bltrp\") pod \"placement-85ae-account-create-update-4pvhp\" (UID: \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\") " pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:19.570628 master-0 kubenswrapper[31559]: I0216 
02:38:19.570606 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-operator-scripts\") pod \"placement-85ae-account-create-update-4pvhp\" (UID: \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\") " pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:19.570969 master-0 kubenswrapper[31559]: I0216 02:38:19.570876 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f478b2fd-7ccb-4480-8887-9decb4a4b32e-operator-scripts\") pod \"placement-db-create-nxnt5\" (UID: \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\") " pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:19.571498 master-0 kubenswrapper[31559]: I0216 02:38:19.571474 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-operator-scripts\") pod \"placement-85ae-account-create-update-4pvhp\" (UID: \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\") " pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:19.589495 master-0 kubenswrapper[31559]: I0216 02:38:19.589459 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchht\" (UniqueName: \"kubernetes.io/projected/f478b2fd-7ccb-4480-8887-9decb4a4b32e-kube-api-access-zchht\") pod \"placement-db-create-nxnt5\" (UID: \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\") " pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:19.590236 master-0 kubenswrapper[31559]: I0216 02:38:19.590211 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bltrp\" (UniqueName: \"kubernetes.io/projected/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-kube-api-access-bltrp\") pod \"placement-85ae-account-create-update-4pvhp\" (UID: \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\") " 
pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:19.787668 master-0 kubenswrapper[31559]: I0216 02:38:19.787611 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:19.809010 master-0 kubenswrapper[31559]: I0216 02:38:19.808967 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:19.876395 master-0 kubenswrapper[31559]: I0216 02:38:19.876298 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0" Feb 16 02:38:19.876653 master-0 kubenswrapper[31559]: E0216 02:38:19.876592 31559 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 02:38:19.876703 master-0 kubenswrapper[31559]: E0216 02:38:19.876654 31559 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 02:38:19.876790 master-0 kubenswrapper[31559]: E0216 02:38:19.876756 31559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift podName:c53aef07-7967-4da8-a56a-018b25e9b92e nodeName:}" failed. No retries permitted until 2026-02-16 02:38:27.876727392 +0000 UTC m=+960.221333447 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift") pod "swift-storage-0" (UID: "c53aef07-7967-4da8-a56a-018b25e9b92e") : configmap "swift-ring-files" not found Feb 16 02:38:20.254919 master-0 kubenswrapper[31559]: I0216 02:38:20.254823 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" Feb 16 02:38:20.675069 master-0 kubenswrapper[31559]: I0216 02:38:20.674615 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9l9b8" Feb 16 02:38:20.679343 master-0 kubenswrapper[31559]: I0216 02:38:20.679294 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hn84p"] Feb 16 02:38:20.680787 master-0 kubenswrapper[31559]: I0216 02:38:20.680709 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" podUID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" containerName="dnsmasq-dns" containerID="cri-o://abfe958393130499ef1a5e66a06c24c9dc9b3a6ce028b4edc19ad88c5828b9a3" gracePeriod=10 Feb 16 02:38:20.701595 master-0 kubenswrapper[31559]: I0216 02:38:20.701553 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a78217a-f6d3-4ff6-9a50-24367edc7a67-operator-scripts\") pod \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\" (UID: \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\") " Feb 16 02:38:20.701845 master-0 kubenswrapper[31559]: I0216 02:38:20.701824 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbcwz\" (UniqueName: \"kubernetes.io/projected/4a78217a-f6d3-4ff6-9a50-24367edc7a67-kube-api-access-cbcwz\") pod \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\" (UID: \"4a78217a-f6d3-4ff6-9a50-24367edc7a67\") " Feb 16 02:38:20.702157 master-0 
kubenswrapper[31559]: I0216 02:38:20.702109 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a78217a-f6d3-4ff6-9a50-24367edc7a67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a78217a-f6d3-4ff6-9a50-24367edc7a67" (UID: "4a78217a-f6d3-4ff6-9a50-24367edc7a67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:20.702796 master-0 kubenswrapper[31559]: I0216 02:38:20.702725 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a78217a-f6d3-4ff6-9a50-24367edc7a67-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:20.726490 master-0 kubenswrapper[31559]: I0216 02:38:20.725159 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a78217a-f6d3-4ff6-9a50-24367edc7a67-kube-api-access-cbcwz" (OuterVolumeSpecName: "kube-api-access-cbcwz") pod "4a78217a-f6d3-4ff6-9a50-24367edc7a67" (UID: "4a78217a-f6d3-4ff6-9a50-24367edc7a67"). InnerVolumeSpecName "kube-api-access-cbcwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:20.807063 master-0 kubenswrapper[31559]: I0216 02:38:20.806954 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbcwz\" (UniqueName: \"kubernetes.io/projected/4a78217a-f6d3-4ff6-9a50-24367edc7a67-kube-api-access-cbcwz\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:20.973449 master-0 kubenswrapper[31559]: I0216 02:38:20.973375 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9l9b8" event={"ID":"4a78217a-f6d3-4ff6-9a50-24367edc7a67","Type":"ContainerDied","Data":"bd605565ddf97991689aef2eacba808933e10b1d5e3facfb9407988838e5a586"} Feb 16 02:38:20.973449 master-0 kubenswrapper[31559]: I0216 02:38:20.973422 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd605565ddf97991689aef2eacba808933e10b1d5e3facfb9407988838e5a586" Feb 16 02:38:20.973667 master-0 kubenswrapper[31559]: I0216 02:38:20.973574 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9l9b8" Feb 16 02:38:20.975516 master-0 kubenswrapper[31559]: I0216 02:38:20.975485 31559 generic.go:334] "Generic (PLEG): container finished" podID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" containerID="abfe958393130499ef1a5e66a06c24c9dc9b3a6ce028b4edc19ad88c5828b9a3" exitCode=0 Feb 16 02:38:20.975516 master-0 kubenswrapper[31559]: I0216 02:38:20.975512 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" event={"ID":"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4","Type":"ContainerDied","Data":"abfe958393130499ef1a5e66a06c24c9dc9b3a6ce028b4edc19ad88c5828b9a3"} Feb 16 02:38:21.437673 master-0 kubenswrapper[31559]: I0216 02:38:21.436549 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fx55d"] Feb 16 02:38:21.441575 master-0 kubenswrapper[31559]: W0216 02:38:21.439659 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec2f0932_270d_4b92_a1a3_03181503039b.slice/crio-83e4b8af4b744ff0ac18b99757ee8dab132a1d4413faad9876b793a390714168 WatchSource:0}: Error finding container 83e4b8af4b744ff0ac18b99757ee8dab132a1d4413faad9876b793a390714168: Status 404 returned error can't find the container with id 83e4b8af4b744ff0ac18b99757ee8dab132a1d4413faad9876b793a390714168 Feb 16 02:38:21.445604 master-0 kubenswrapper[31559]: I0216 02:38:21.443861 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a2d6-account-create-update-f9fsm"] Feb 16 02:38:21.448672 master-0 kubenswrapper[31559]: W0216 02:38:21.448562 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod292a5fd9_3c50_42c9_8f8d_bfbe7dbec084.slice/crio-baa444197bf5c5f4c9392fef71908fa3c067a048fcc62847971183404e786acf WatchSource:0}: Error finding container 
baa444197bf5c5f4c9392fef71908fa3c067a048fcc62847971183404e786acf: Status 404 returned error can't find the container with id baa444197bf5c5f4c9392fef71908fa3c067a048fcc62847971183404e786acf Feb 16 02:38:21.601806 master-0 kubenswrapper[31559]: I0216 02:38:21.601760 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nxnt5"] Feb 16 02:38:21.623333 master-0 kubenswrapper[31559]: W0216 02:38:21.623285 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf478b2fd_7ccb_4480_8887_9decb4a4b32e.slice/crio-b11ace13f015f3632f15460aa9d8c40cbaec70e3adcc42c49ca07b27cd6727fa WatchSource:0}: Error finding container b11ace13f015f3632f15460aa9d8c40cbaec70e3adcc42c49ca07b27cd6727fa: Status 404 returned error can't find the container with id b11ace13f015f3632f15460aa9d8c40cbaec70e3adcc42c49ca07b27cd6727fa Feb 16 02:38:21.628970 master-0 kubenswrapper[31559]: I0216 02:38:21.628922 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-24kfs"] Feb 16 02:38:21.635280 master-0 kubenswrapper[31559]: I0216 02:38:21.635243 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ae-account-create-update-4pvhp"] Feb 16 02:38:21.641930 master-0 kubenswrapper[31559]: W0216 02:38:21.641530 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode71ec1c5_6255_4555_b1b8_685613eb634b.slice/crio-5f2f2aaad791c4536f053acb51a27f7cd5d7d2c5dd88f9a3ab11c411a0809c83 WatchSource:0}: Error finding container 5f2f2aaad791c4536f053acb51a27f7cd5d7d2c5dd88f9a3ab11c411a0809c83: Status 404 returned error can't find the container with id 5f2f2aaad791c4536f053acb51a27f7cd5d7d2c5dd88f9a3ab11c411a0809c83 Feb 16 02:38:21.685608 master-0 kubenswrapper[31559]: I0216 02:38:21.685575 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-a6ad-account-create-update-sg6j7"] Feb 16 02:38:21.723013 master-0 kubenswrapper[31559]: I0216 02:38:21.722990 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:38:21.843102 master-0 kubenswrapper[31559]: I0216 02:38:21.843076 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-dns-svc\") pod \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " Feb 16 02:38:21.843365 master-0 kubenswrapper[31559]: I0216 02:38:21.843350 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p2pk\" (UniqueName: \"kubernetes.io/projected/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-kube-api-access-9p2pk\") pod \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " Feb 16 02:38:21.843606 master-0 kubenswrapper[31559]: I0216 02:38:21.843556 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-config\") pod \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\" (UID: \"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4\") " Feb 16 02:38:21.854488 master-0 kubenswrapper[31559]: I0216 02:38:21.854405 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-kube-api-access-9p2pk" (OuterVolumeSpecName: "kube-api-access-9p2pk") pod "c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" (UID: "c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4"). InnerVolumeSpecName "kube-api-access-9p2pk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:21.945993 master-0 kubenswrapper[31559]: I0216 02:38:21.945929 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p2pk\" (UniqueName: \"kubernetes.io/projected/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-kube-api-access-9p2pk\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:21.958877 master-0 kubenswrapper[31559]: I0216 02:38:21.957950 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-config" (OuterVolumeSpecName: "config") pod "c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" (UID: "c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:21.975098 master-0 kubenswrapper[31559]: I0216 02:38:21.975024 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" (UID: "c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:21.991657 master-0 kubenswrapper[31559]: I0216 02:38:21.990891 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dtrws" event={"ID":"cb609b01-8137-43bc-a5b5-a9c3744a9067","Type":"ContainerStarted","Data":"447517bce2aaac83ae0397a8b32f3edff059a0aa6050f3104c0af929bbcc84e9"} Feb 16 02:38:21.995462 master-0 kubenswrapper[31559]: I0216 02:38:21.995407 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a6ad-account-create-update-sg6j7" event={"ID":"dee98140-f80a-4c3f-81aa-5a5183eeb144","Type":"ContainerStarted","Data":"b86a9279ace4ab4733d1c9807a2b522c0a591a646304505d9bf451a966fc3236"} Feb 16 02:38:21.995462 master-0 kubenswrapper[31559]: I0216 02:38:21.995460 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a6ad-account-create-update-sg6j7" event={"ID":"dee98140-f80a-4c3f-81aa-5a5183eeb144","Type":"ContainerStarted","Data":"5a8a41c4f29c2c7bd4cf7d3fb06ba7108186ff84311e43474272dbd065a8f865"} Feb 16 02:38:21.997121 master-0 kubenswrapper[31559]: I0216 02:38:21.997060 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-24kfs" event={"ID":"e71ec1c5-6255-4555-b1b8-685613eb634b","Type":"ContainerStarted","Data":"da71f6d2a2f407279b730378c7690ac89969556512a3a05f1563b6a826e32857"} Feb 16 02:38:21.997121 master-0 kubenswrapper[31559]: I0216 02:38:21.997116 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-24kfs" event={"ID":"e71ec1c5-6255-4555-b1b8-685613eb634b","Type":"ContainerStarted","Data":"5f2f2aaad791c4536f053acb51a27f7cd5d7d2c5dd88f9a3ab11c411a0809c83"} Feb 16 02:38:22.003763 master-0 kubenswrapper[31559]: I0216 02:38:22.001999 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ae-account-create-update-4pvhp" 
event={"ID":"0c275eb5-4e56-4ade-8c4a-142ed7dbafea","Type":"ContainerStarted","Data":"6fc531dd656582b5089f8fe6b11e89727951bafa71a08e66d7322ed27015d58f"} Feb 16 02:38:22.003763 master-0 kubenswrapper[31559]: I0216 02:38:22.002037 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ae-account-create-update-4pvhp" event={"ID":"0c275eb5-4e56-4ade-8c4a-142ed7dbafea","Type":"ContainerStarted","Data":"68a84ba8d9269e12d5d3a1edaacc3cd97fd6a5c03d6a33ce88ad0d6a6efb0d72"} Feb 16 02:38:22.004583 master-0 kubenswrapper[31559]: I0216 02:38:22.004133 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fx55d" event={"ID":"ec2f0932-270d-4b92-a1a3-03181503039b","Type":"ContainerStarted","Data":"36ce22eb02d3d001fb5039561a5e2b07970bd43ea6a67bd8c6188bb4cfa5c898"} Feb 16 02:38:22.004583 master-0 kubenswrapper[31559]: I0216 02:38:22.004186 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fx55d" event={"ID":"ec2f0932-270d-4b92-a1a3-03181503039b","Type":"ContainerStarted","Data":"83e4b8af4b744ff0ac18b99757ee8dab132a1d4413faad9876b793a390714168"} Feb 16 02:38:22.007732 master-0 kubenswrapper[31559]: I0216 02:38:22.007697 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a2d6-account-create-update-f9fsm" event={"ID":"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084","Type":"ContainerStarted","Data":"c90a1be88865a2f8ff4c1adffae60f65354fc0626e863b41d677b18635a4b5ce"} Feb 16 02:38:22.007732 master-0 kubenswrapper[31559]: I0216 02:38:22.007731 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a2d6-account-create-update-f9fsm" event={"ID":"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084","Type":"ContainerStarted","Data":"baa444197bf5c5f4c9392fef71908fa3c067a048fcc62847971183404e786acf"} Feb 16 02:38:22.012265 master-0 kubenswrapper[31559]: I0216 02:38:22.011605 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxnt5" 
event={"ID":"f478b2fd-7ccb-4480-8887-9decb4a4b32e","Type":"ContainerStarted","Data":"6bdec7d62c43a3fdc29fd47dda7c756fcec88d6d779e04dd5caf34785e480651"} Feb 16 02:38:22.012265 master-0 kubenswrapper[31559]: I0216 02:38:22.011657 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxnt5" event={"ID":"f478b2fd-7ccb-4480-8887-9decb4a4b32e","Type":"ContainerStarted","Data":"b11ace13f015f3632f15460aa9d8c40cbaec70e3adcc42c49ca07b27cd6727fa"} Feb 16 02:38:22.014204 master-0 kubenswrapper[31559]: I0216 02:38:22.014147 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" event={"ID":"c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4","Type":"ContainerDied","Data":"4c501fa60bbe27291420b857df3494a8e65c9fac03ce98651a04969b3637f144"} Feb 16 02:38:22.014204 master-0 kubenswrapper[31559]: I0216 02:38:22.014184 31559 scope.go:117] "RemoveContainer" containerID="abfe958393130499ef1a5e66a06c24c9dc9b3a6ce028b4edc19ad88c5828b9a3" Feb 16 02:38:22.014453 master-0 kubenswrapper[31559]: I0216 02:38:22.014410 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-hn84p" Feb 16 02:38:22.027087 master-0 kubenswrapper[31559]: I0216 02:38:22.027025 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dtrws" podStartSLOduration=2.865149507 podStartE2EDuration="7.02700118s" podCreationTimestamp="2026-02-16 02:38:15 +0000 UTC" firstStartedPulling="2026-02-16 02:38:16.566530436 +0000 UTC m=+948.911136451" lastFinishedPulling="2026-02-16 02:38:20.728382109 +0000 UTC m=+953.072988124" observedRunningTime="2026-02-16 02:38:22.005293484 +0000 UTC m=+954.349899499" watchObservedRunningTime="2026-02-16 02:38:22.02700118 +0000 UTC m=+954.371607195" Feb 16 02:38:22.037410 master-0 kubenswrapper[31559]: I0216 02:38:22.037373 31559 scope.go:117] "RemoveContainer" containerID="d84213aad1d75222804e8e1efe9f65f16b8447c7ca45e229a62d116e45ed3773" Feb 16 02:38:22.049603 master-0 kubenswrapper[31559]: I0216 02:38:22.049523 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:22.049603 master-0 kubenswrapper[31559]: I0216 02:38:22.049555 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:22.052675 master-0 kubenswrapper[31559]: I0216 02:38:22.052614 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85ae-account-create-update-4pvhp" podStartSLOduration=3.052589395 podStartE2EDuration="3.052589395s" podCreationTimestamp="2026-02-16 02:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:22.037724384 +0000 UTC m=+954.382330399" watchObservedRunningTime="2026-02-16 
02:38:22.052589395 +0000 UTC m=+954.397195410" Feb 16 02:38:22.084249 master-0 kubenswrapper[31559]: I0216 02:38:22.084176 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-24kfs" podStartSLOduration=4.084155243 podStartE2EDuration="4.084155243s" podCreationTimestamp="2026-02-16 02:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:22.05864479 +0000 UTC m=+954.403250805" watchObservedRunningTime="2026-02-16 02:38:22.084155243 +0000 UTC m=+954.428761258" Feb 16 02:38:22.105609 master-0 kubenswrapper[31559]: I0216 02:38:22.105464 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-fx55d" podStartSLOduration=4.105442618 podStartE2EDuration="4.105442618s" podCreationTimestamp="2026-02-16 02:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:22.071812367 +0000 UTC m=+954.416418382" watchObservedRunningTime="2026-02-16 02:38:22.105442618 +0000 UTC m=+954.450048633" Feb 16 02:38:22.138262 master-0 kubenswrapper[31559]: I0216 02:38:22.137809 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-a6ad-account-create-update-sg6j7" podStartSLOduration=4.137788707 podStartE2EDuration="4.137788707s" podCreationTimestamp="2026-02-16 02:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:22.091618864 +0000 UTC m=+954.436224879" watchObservedRunningTime="2026-02-16 02:38:22.137788707 +0000 UTC m=+954.482394722" Feb 16 02:38:22.153665 master-0 kubenswrapper[31559]: I0216 02:38:22.153576 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-a2d6-account-create-update-f9fsm" podStartSLOduration=4.15355782 podStartE2EDuration="4.15355782s" podCreationTimestamp="2026-02-16 02:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:22.109460081 +0000 UTC m=+954.454066096" watchObservedRunningTime="2026-02-16 02:38:22.15355782 +0000 UTC m=+954.498163835" Feb 16 02:38:22.196090 master-0 kubenswrapper[31559]: I0216 02:38:22.196020 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hn84p"] Feb 16 02:38:22.203507 master-0 kubenswrapper[31559]: I0216 02:38:22.203233 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-hn84p"] Feb 16 02:38:22.211368 master-0 kubenswrapper[31559]: I0216 02:38:22.211280 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-nxnt5" podStartSLOduration=3.211257128 podStartE2EDuration="3.211257128s" podCreationTimestamp="2026-02-16 02:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:22.145285488 +0000 UTC m=+954.489891503" watchObservedRunningTime="2026-02-16 02:38:22.211257128 +0000 UTC m=+954.555863143" Feb 16 02:38:23.031854 master-0 kubenswrapper[31559]: I0216 02:38:23.031778 31559 generic.go:334] "Generic (PLEG): container finished" podID="292a5fd9-3c50-42c9-8f8d-bfbe7dbec084" containerID="c90a1be88865a2f8ff4c1adffae60f65354fc0626e863b41d677b18635a4b5ce" exitCode=0 Feb 16 02:38:23.032421 master-0 kubenswrapper[31559]: I0216 02:38:23.031892 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a2d6-account-create-update-f9fsm" event={"ID":"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084","Type":"ContainerDied","Data":"c90a1be88865a2f8ff4c1adffae60f65354fc0626e863b41d677b18635a4b5ce"} Feb 16 
02:38:23.034552 master-0 kubenswrapper[31559]: I0216 02:38:23.034497 31559 generic.go:334] "Generic (PLEG): container finished" podID="f478b2fd-7ccb-4480-8887-9decb4a4b32e" containerID="6bdec7d62c43a3fdc29fd47dda7c756fcec88d6d779e04dd5caf34785e480651" exitCode=0 Feb 16 02:38:23.034658 master-0 kubenswrapper[31559]: I0216 02:38:23.034586 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxnt5" event={"ID":"f478b2fd-7ccb-4480-8887-9decb4a4b32e","Type":"ContainerDied","Data":"6bdec7d62c43a3fdc29fd47dda7c756fcec88d6d779e04dd5caf34785e480651"} Feb 16 02:38:23.041549 master-0 kubenswrapper[31559]: I0216 02:38:23.041483 31559 generic.go:334] "Generic (PLEG): container finished" podID="dee98140-f80a-4c3f-81aa-5a5183eeb144" containerID="b86a9279ace4ab4733d1c9807a2b522c0a591a646304505d9bf451a966fc3236" exitCode=0 Feb 16 02:38:23.041611 master-0 kubenswrapper[31559]: I0216 02:38:23.041539 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a6ad-account-create-update-sg6j7" event={"ID":"dee98140-f80a-4c3f-81aa-5a5183eeb144","Type":"ContainerDied","Data":"b86a9279ace4ab4733d1c9807a2b522c0a591a646304505d9bf451a966fc3236"} Feb 16 02:38:23.044040 master-0 kubenswrapper[31559]: I0216 02:38:23.044000 31559 generic.go:334] "Generic (PLEG): container finished" podID="e71ec1c5-6255-4555-b1b8-685613eb634b" containerID="da71f6d2a2f407279b730378c7690ac89969556512a3a05f1563b6a826e32857" exitCode=0 Feb 16 02:38:23.044107 master-0 kubenswrapper[31559]: I0216 02:38:23.044076 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-24kfs" event={"ID":"e71ec1c5-6255-4555-b1b8-685613eb634b","Type":"ContainerDied","Data":"da71f6d2a2f407279b730378c7690ac89969556512a3a05f1563b6a826e32857"} Feb 16 02:38:23.048069 master-0 kubenswrapper[31559]: I0216 02:38:23.048006 31559 generic.go:334] "Generic (PLEG): container finished" podID="0c275eb5-4e56-4ade-8c4a-142ed7dbafea" 
containerID="6fc531dd656582b5089f8fe6b11e89727951bafa71a08e66d7322ed27015d58f" exitCode=0 Feb 16 02:38:23.048172 master-0 kubenswrapper[31559]: I0216 02:38:23.048082 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ae-account-create-update-4pvhp" event={"ID":"0c275eb5-4e56-4ade-8c4a-142ed7dbafea","Type":"ContainerDied","Data":"6fc531dd656582b5089f8fe6b11e89727951bafa71a08e66d7322ed27015d58f"} Feb 16 02:38:23.052468 master-0 kubenswrapper[31559]: I0216 02:38:23.052392 31559 generic.go:334] "Generic (PLEG): container finished" podID="ec2f0932-270d-4b92-a1a3-03181503039b" containerID="36ce22eb02d3d001fb5039561a5e2b07970bd43ea6a67bd8c6188bb4cfa5c898" exitCode=0 Feb 16 02:38:23.052597 master-0 kubenswrapper[31559]: I0216 02:38:23.052506 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fx55d" event={"ID":"ec2f0932-270d-4b92-a1a3-03181503039b","Type":"ContainerDied","Data":"36ce22eb02d3d001fb5039561a5e2b07970bd43ea6a67bd8c6188bb4cfa5c898"} Feb 16 02:38:23.956758 master-0 kubenswrapper[31559]: I0216 02:38:23.956619 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" path="/var/lib/kubelet/pods/c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4/volumes" Feb 16 02:38:24.714607 master-0 kubenswrapper[31559]: I0216 02:38:24.714330 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:24.726907 master-0 kubenswrapper[31559]: I0216 02:38:24.726761 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f478b2fd-7ccb-4480-8887-9decb4a4b32e-operator-scripts\") pod \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\" (UID: \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\") " Feb 16 02:38:24.727078 master-0 kubenswrapper[31559]: I0216 02:38:24.727059 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zchht\" (UniqueName: \"kubernetes.io/projected/f478b2fd-7ccb-4480-8887-9decb4a4b32e-kube-api-access-zchht\") pod \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\" (UID: \"f478b2fd-7ccb-4480-8887-9decb4a4b32e\") " Feb 16 02:38:24.727671 master-0 kubenswrapper[31559]: I0216 02:38:24.727610 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f478b2fd-7ccb-4480-8887-9decb4a4b32e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f478b2fd-7ccb-4480-8887-9decb4a4b32e" (UID: "f478b2fd-7ccb-4480-8887-9decb4a4b32e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:24.743823 master-0 kubenswrapper[31559]: I0216 02:38:24.736765 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f478b2fd-7ccb-4480-8887-9decb4a4b32e-kube-api-access-zchht" (OuterVolumeSpecName: "kube-api-access-zchht") pod "f478b2fd-7ccb-4480-8887-9decb4a4b32e" (UID: "f478b2fd-7ccb-4480-8887-9decb4a4b32e"). InnerVolumeSpecName "kube-api-access-zchht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:24.828581 master-0 kubenswrapper[31559]: I0216 02:38:24.828495 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f478b2fd-7ccb-4480-8887-9decb4a4b32e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:24.828581 master-0 kubenswrapper[31559]: I0216 02:38:24.828562 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zchht\" (UniqueName: \"kubernetes.io/projected/f478b2fd-7ccb-4480-8887-9decb4a4b32e-kube-api-access-zchht\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.043840 master-0 kubenswrapper[31559]: I0216 02:38:25.043787 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:25.052043 master-0 kubenswrapper[31559]: I0216 02:38:25.051999 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:25.055348 master-0 kubenswrapper[31559]: I0216 02:38:25.055313 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fx55d" Feb 16 02:38:25.074459 master-0 kubenswrapper[31559]: I0216 02:38:25.074394 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:25.083924 master-0 kubenswrapper[31559]: I0216 02:38:25.083817 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:25.103875 master-0 kubenswrapper[31559]: I0216 02:38:25.103817 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a2d6-account-create-update-f9fsm" Feb 16 02:38:25.104063 master-0 kubenswrapper[31559]: I0216 02:38:25.103831 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a2d6-account-create-update-f9fsm" event={"ID":"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084","Type":"ContainerDied","Data":"baa444197bf5c5f4c9392fef71908fa3c067a048fcc62847971183404e786acf"} Feb 16 02:38:25.104063 master-0 kubenswrapper[31559]: I0216 02:38:25.103971 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baa444197bf5c5f4c9392fef71908fa3c067a048fcc62847971183404e786acf" Feb 16 02:38:25.111065 master-0 kubenswrapper[31559]: I0216 02:38:25.111023 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fx55d" Feb 16 02:38:25.111171 master-0 kubenswrapper[31559]: I0216 02:38:25.111039 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fx55d" event={"ID":"ec2f0932-270d-4b92-a1a3-03181503039b","Type":"ContainerDied","Data":"83e4b8af4b744ff0ac18b99757ee8dab132a1d4413faad9876b793a390714168"} Feb 16 02:38:25.111171 master-0 kubenswrapper[31559]: I0216 02:38:25.111094 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e4b8af4b744ff0ac18b99757ee8dab132a1d4413faad9876b793a390714168" Feb 16 02:38:25.113787 master-0 kubenswrapper[31559]: I0216 02:38:25.113730 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxnt5" event={"ID":"f478b2fd-7ccb-4480-8887-9decb4a4b32e","Type":"ContainerDied","Data":"b11ace13f015f3632f15460aa9d8c40cbaec70e3adcc42c49ca07b27cd6727fa"} Feb 16 02:38:25.113787 master-0 kubenswrapper[31559]: I0216 02:38:25.113763 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b11ace13f015f3632f15460aa9d8c40cbaec70e3adcc42c49ca07b27cd6727fa" Feb 16 02:38:25.113913 master-0 
kubenswrapper[31559]: I0216 02:38:25.113802 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nxnt5" Feb 16 02:38:25.139616 master-0 kubenswrapper[31559]: I0216 02:38:25.125667 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a6ad-account-create-update-sg6j7" Feb 16 02:38:25.139616 master-0 kubenswrapper[31559]: I0216 02:38:25.125831 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a6ad-account-create-update-sg6j7" event={"ID":"dee98140-f80a-4c3f-81aa-5a5183eeb144","Type":"ContainerDied","Data":"5a8a41c4f29c2c7bd4cf7d3fb06ba7108186ff84311e43474272dbd065a8f865"} Feb 16 02:38:25.139616 master-0 kubenswrapper[31559]: I0216 02:38:25.126006 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a8a41c4f29c2c7bd4cf7d3fb06ba7108186ff84311e43474272dbd065a8f865" Feb 16 02:38:25.139616 master-0 kubenswrapper[31559]: I0216 02:38:25.131070 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-24kfs" event={"ID":"e71ec1c5-6255-4555-b1b8-685613eb634b","Type":"ContainerDied","Data":"5f2f2aaad791c4536f053acb51a27f7cd5d7d2c5dd88f9a3ab11c411a0809c83"} Feb 16 02:38:25.139616 master-0 kubenswrapper[31559]: I0216 02:38:25.131148 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f2f2aaad791c4536f053acb51a27f7cd5d7d2c5dd88f9a3ab11c411a0809c83" Feb 16 02:38:25.139616 master-0 kubenswrapper[31559]: I0216 02:38:25.131183 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-24kfs" Feb 16 02:38:25.141260 master-0 kubenswrapper[31559]: I0216 02:38:25.141200 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ae-account-create-update-4pvhp" event={"ID":"0c275eb5-4e56-4ade-8c4a-142ed7dbafea","Type":"ContainerDied","Data":"68a84ba8d9269e12d5d3a1edaacc3cd97fd6a5c03d6a33ce88ad0d6a6efb0d72"} Feb 16 02:38:25.141319 master-0 kubenswrapper[31559]: I0216 02:38:25.141269 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a84ba8d9269e12d5d3a1edaacc3cd97fd6a5c03d6a33ce88ad0d6a6efb0d72" Feb 16 02:38:25.141413 master-0 kubenswrapper[31559]: I0216 02:38:25.141386 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ae-account-create-update-4pvhp" Feb 16 02:38:25.241189 master-0 kubenswrapper[31559]: I0216 02:38:25.241056 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2f0932-270d-4b92-a1a3-03181503039b-operator-scripts\") pod \"ec2f0932-270d-4b92-a1a3-03181503039b\" (UID: \"ec2f0932-270d-4b92-a1a3-03181503039b\") " Feb 16 02:38:25.241189 master-0 kubenswrapper[31559]: I0216 02:38:25.241129 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt2zl\" (UniqueName: \"kubernetes.io/projected/ec2f0932-270d-4b92-a1a3-03181503039b-kube-api-access-mt2zl\") pod \"ec2f0932-270d-4b92-a1a3-03181503039b\" (UID: \"ec2f0932-270d-4b92-a1a3-03181503039b\") " Feb 16 02:38:25.241428 master-0 kubenswrapper[31559]: I0216 02:38:25.241245 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e71ec1c5-6255-4555-b1b8-685613eb634b-operator-scripts\") pod \"e71ec1c5-6255-4555-b1b8-685613eb634b\" (UID: \"e71ec1c5-6255-4555-b1b8-685613eb634b\") " Feb 16 
02:38:25.241428 master-0 kubenswrapper[31559]: I0216 02:38:25.241271 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26gq4\" (UniqueName: \"kubernetes.io/projected/e71ec1c5-6255-4555-b1b8-685613eb634b-kube-api-access-26gq4\") pod \"e71ec1c5-6255-4555-b1b8-685613eb634b\" (UID: \"e71ec1c5-6255-4555-b1b8-685613eb634b\") " Feb 16 02:38:25.241428 master-0 kubenswrapper[31559]: I0216 02:38:25.241302 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bltrp\" (UniqueName: \"kubernetes.io/projected/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-kube-api-access-bltrp\") pod \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\" (UID: \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\") " Feb 16 02:38:25.241428 master-0 kubenswrapper[31559]: I0216 02:38:25.241403 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhddr\" (UniqueName: \"kubernetes.io/projected/dee98140-f80a-4c3f-81aa-5a5183eeb144-kube-api-access-hhddr\") pod \"dee98140-f80a-4c3f-81aa-5a5183eeb144\" (UID: \"dee98140-f80a-4c3f-81aa-5a5183eeb144\") " Feb 16 02:38:25.241624 master-0 kubenswrapper[31559]: I0216 02:38:25.241504 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee98140-f80a-4c3f-81aa-5a5183eeb144-operator-scripts\") pod \"dee98140-f80a-4c3f-81aa-5a5183eeb144\" (UID: \"dee98140-f80a-4c3f-81aa-5a5183eeb144\") " Feb 16 02:38:25.241835 master-0 kubenswrapper[31559]: I0216 02:38:25.241775 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e71ec1c5-6255-4555-b1b8-685613eb634b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e71ec1c5-6255-4555-b1b8-685613eb634b" (UID: "e71ec1c5-6255-4555-b1b8-685613eb634b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:25.241835 master-0 kubenswrapper[31559]: I0216 02:38:25.241775 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2f0932-270d-4b92-a1a3-03181503039b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec2f0932-270d-4b92-a1a3-03181503039b" (UID: "ec2f0932-270d-4b92-a1a3-03181503039b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:25.241979 master-0 kubenswrapper[31559]: I0216 02:38:25.241946 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgv5l\" (UniqueName: \"kubernetes.io/projected/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-kube-api-access-xgv5l\") pod \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\" (UID: \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\") " Feb 16 02:38:25.242034 master-0 kubenswrapper[31559]: I0216 02:38:25.241985 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-operator-scripts\") pod \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\" (UID: \"0c275eb5-4e56-4ade-8c4a-142ed7dbafea\") " Feb 16 02:38:25.242034 master-0 kubenswrapper[31559]: I0216 02:38:25.242011 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-operator-scripts\") pod \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\" (UID: \"292a5fd9-3c50-42c9-8f8d-bfbe7dbec084\") " Feb 16 02:38:25.242423 master-0 kubenswrapper[31559]: I0216 02:38:25.242385 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dee98140-f80a-4c3f-81aa-5a5183eeb144-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dee98140-f80a-4c3f-81aa-5a5183eeb144" (UID: 
"dee98140-f80a-4c3f-81aa-5a5183eeb144"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:25.242693 master-0 kubenswrapper[31559]: I0216 02:38:25.242659 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2f0932-270d-4b92-a1a3-03181503039b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.242693 master-0 kubenswrapper[31559]: I0216 02:38:25.242687 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e71ec1c5-6255-4555-b1b8-685613eb634b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.242789 master-0 kubenswrapper[31559]: I0216 02:38:25.242703 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee98140-f80a-4c3f-81aa-5a5183eeb144-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.242938 master-0 kubenswrapper[31559]: I0216 02:38:25.242900 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "292a5fd9-3c50-42c9-8f8d-bfbe7dbec084" (UID: "292a5fd9-3c50-42c9-8f8d-bfbe7dbec084"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:25.242938 master-0 kubenswrapper[31559]: I0216 02:38:25.242902 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c275eb5-4e56-4ade-8c4a-142ed7dbafea" (UID: "0c275eb5-4e56-4ade-8c4a-142ed7dbafea"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:25.244843 master-0 kubenswrapper[31559]: I0216 02:38:25.244777 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-kube-api-access-bltrp" (OuterVolumeSpecName: "kube-api-access-bltrp") pod "0c275eb5-4e56-4ade-8c4a-142ed7dbafea" (UID: "0c275eb5-4e56-4ade-8c4a-142ed7dbafea"). InnerVolumeSpecName "kube-api-access-bltrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:25.245094 master-0 kubenswrapper[31559]: I0216 02:38:25.245052 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2f0932-270d-4b92-a1a3-03181503039b-kube-api-access-mt2zl" (OuterVolumeSpecName: "kube-api-access-mt2zl") pod "ec2f0932-270d-4b92-a1a3-03181503039b" (UID: "ec2f0932-270d-4b92-a1a3-03181503039b"). InnerVolumeSpecName "kube-api-access-mt2zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:25.245733 master-0 kubenswrapper[31559]: I0216 02:38:25.245692 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e71ec1c5-6255-4555-b1b8-685613eb634b-kube-api-access-26gq4" (OuterVolumeSpecName: "kube-api-access-26gq4") pod "e71ec1c5-6255-4555-b1b8-685613eb634b" (UID: "e71ec1c5-6255-4555-b1b8-685613eb634b"). InnerVolumeSpecName "kube-api-access-26gq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:25.246190 master-0 kubenswrapper[31559]: I0216 02:38:25.246146 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-kube-api-access-xgv5l" (OuterVolumeSpecName: "kube-api-access-xgv5l") pod "292a5fd9-3c50-42c9-8f8d-bfbe7dbec084" (UID: "292a5fd9-3c50-42c9-8f8d-bfbe7dbec084"). InnerVolumeSpecName "kube-api-access-xgv5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:25.246238 master-0 kubenswrapper[31559]: I0216 02:38:25.246181 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dee98140-f80a-4c3f-81aa-5a5183eeb144-kube-api-access-hhddr" (OuterVolumeSpecName: "kube-api-access-hhddr") pod "dee98140-f80a-4c3f-81aa-5a5183eeb144" (UID: "dee98140-f80a-4c3f-81aa-5a5183eeb144"). InnerVolumeSpecName "kube-api-access-hhddr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:25.344394 master-0 kubenswrapper[31559]: I0216 02:38:25.344314 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26gq4\" (UniqueName: \"kubernetes.io/projected/e71ec1c5-6255-4555-b1b8-685613eb634b-kube-api-access-26gq4\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.344394 master-0 kubenswrapper[31559]: I0216 02:38:25.344364 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bltrp\" (UniqueName: \"kubernetes.io/projected/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-kube-api-access-bltrp\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.344394 master-0 kubenswrapper[31559]: I0216 02:38:25.344380 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhddr\" (UniqueName: \"kubernetes.io/projected/dee98140-f80a-4c3f-81aa-5a5183eeb144-kube-api-access-hhddr\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.344394 master-0 kubenswrapper[31559]: I0216 02:38:25.344395 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgv5l\" (UniqueName: \"kubernetes.io/projected/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-kube-api-access-xgv5l\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.344394 master-0 kubenswrapper[31559]: I0216 02:38:25.344408 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c275eb5-4e56-4ade-8c4a-142ed7dbafea-operator-scripts\") on node 
\"master-0\" DevicePath \"\"" Feb 16 02:38:25.344394 master-0 kubenswrapper[31559]: I0216 02:38:25.344422 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.345151 master-0 kubenswrapper[31559]: I0216 02:38:25.344479 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt2zl\" (UniqueName: \"kubernetes.io/projected/ec2f0932-270d-4b92-a1a3-03181503039b-kube-api-access-mt2zl\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:25.784577 master-0 kubenswrapper[31559]: I0216 02:38:25.783594 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 02:38:26.465487 master-0 kubenswrapper[31559]: I0216 02:38:26.463238 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9l9b8"] Feb 16 02:38:26.482068 master-0 kubenswrapper[31559]: I0216 02:38:26.481478 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9l9b8"] Feb 16 02:38:27.906938 master-0 kubenswrapper[31559]: I0216 02:38:27.906835 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0" Feb 16 02:38:27.920486 master-0 kubenswrapper[31559]: I0216 02:38:27.915712 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c53aef07-7967-4da8-a56a-018b25e9b92e-etc-swift\") pod \"swift-storage-0\" (UID: \"c53aef07-7967-4da8-a56a-018b25e9b92e\") " pod="openstack/swift-storage-0" Feb 16 02:38:27.930531 master-0 kubenswrapper[31559]: I0216 02:38:27.923569 31559 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 02:38:27.977837 master-0 kubenswrapper[31559]: I0216 02:38:27.977764 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a78217a-f6d3-4ff6-9a50-24367edc7a67" path="/var/lib/kubelet/pods/4a78217a-f6d3-4ff6-9a50-24367edc7a67/volumes" Feb 16 02:38:28.214600 master-0 kubenswrapper[31559]: I0216 02:38:28.214428 31559 generic.go:334] "Generic (PLEG): container finished" podID="cb609b01-8137-43bc-a5b5-a9c3744a9067" containerID="447517bce2aaac83ae0397a8b32f3edff059a0aa6050f3104c0af929bbcc84e9" exitCode=0 Feb 16 02:38:28.214600 master-0 kubenswrapper[31559]: I0216 02:38:28.214550 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dtrws" event={"ID":"cb609b01-8137-43bc-a5b5-a9c3744a9067","Type":"ContainerDied","Data":"447517bce2aaac83ae0397a8b32f3edff059a0aa6050f3104c0af929bbcc84e9"} Feb 16 02:38:28.368069 master-0 kubenswrapper[31559]: I0216 02:38:28.367981 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kcsvv"] Feb 16 02:38:28.369929 master-0 kubenswrapper[31559]: E0216 02:38:28.369874 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f478b2fd-7ccb-4480-8887-9decb4a4b32e" containerName="mariadb-database-create" Feb 16 02:38:28.369929 master-0 kubenswrapper[31559]: I0216 02:38:28.369921 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="f478b2fd-7ccb-4480-8887-9decb4a4b32e" containerName="mariadb-database-create" Feb 16 02:38:28.370092 master-0 kubenswrapper[31559]: E0216 02:38:28.369958 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2f0932-270d-4b92-a1a3-03181503039b" containerName="mariadb-database-create" Feb 16 02:38:28.370092 master-0 kubenswrapper[31559]: I0216 02:38:28.369976 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2f0932-270d-4b92-a1a3-03181503039b" containerName="mariadb-database-create" Feb 16 
02:38:28.370092 master-0 kubenswrapper[31559]: E0216 02:38:28.369997 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71ec1c5-6255-4555-b1b8-685613eb634b" containerName="mariadb-database-create" Feb 16 02:38:28.370092 master-0 kubenswrapper[31559]: I0216 02:38:28.370011 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71ec1c5-6255-4555-b1b8-685613eb634b" containerName="mariadb-database-create" Feb 16 02:38:28.370092 master-0 kubenswrapper[31559]: E0216 02:38:28.370042 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee98140-f80a-4c3f-81aa-5a5183eeb144" containerName="mariadb-account-create-update" Feb 16 02:38:28.370092 master-0 kubenswrapper[31559]: I0216 02:38:28.370056 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee98140-f80a-4c3f-81aa-5a5183eeb144" containerName="mariadb-account-create-update" Feb 16 02:38:28.370092 master-0 kubenswrapper[31559]: E0216 02:38:28.370083 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" containerName="init" Feb 16 02:38:28.370092 master-0 kubenswrapper[31559]: I0216 02:38:28.370097 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" containerName="init" Feb 16 02:38:28.370635 master-0 kubenswrapper[31559]: E0216 02:38:28.370125 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" containerName="dnsmasq-dns" Feb 16 02:38:28.370635 master-0 kubenswrapper[31559]: I0216 02:38:28.370139 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" containerName="dnsmasq-dns" Feb 16 02:38:28.370635 master-0 kubenswrapper[31559]: E0216 02:38:28.370155 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a78217a-f6d3-4ff6-9a50-24367edc7a67" containerName="mariadb-account-create-update" Feb 16 02:38:28.370635 master-0 kubenswrapper[31559]: I0216 
02:38:28.370170 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a78217a-f6d3-4ff6-9a50-24367edc7a67" containerName="mariadb-account-create-update" Feb 16 02:38:28.370635 master-0 kubenswrapper[31559]: E0216 02:38:28.370207 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c275eb5-4e56-4ade-8c4a-142ed7dbafea" containerName="mariadb-account-create-update" Feb 16 02:38:28.370635 master-0 kubenswrapper[31559]: I0216 02:38:28.370223 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c275eb5-4e56-4ade-8c4a-142ed7dbafea" containerName="mariadb-account-create-update" Feb 16 02:38:28.370635 master-0 kubenswrapper[31559]: E0216 02:38:28.370254 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292a5fd9-3c50-42c9-8f8d-bfbe7dbec084" containerName="mariadb-account-create-update" Feb 16 02:38:28.370635 master-0 kubenswrapper[31559]: I0216 02:38:28.370268 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="292a5fd9-3c50-42c9-8f8d-bfbe7dbec084" containerName="mariadb-account-create-update" Feb 16 02:38:28.371147 master-0 kubenswrapper[31559]: I0216 02:38:28.370670 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="f478b2fd-7ccb-4480-8887-9decb4a4b32e" containerName="mariadb-database-create" Feb 16 02:38:28.371147 master-0 kubenswrapper[31559]: I0216 02:38:28.370692 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee98140-f80a-4c3f-81aa-5a5183eeb144" containerName="mariadb-account-create-update" Feb 16 02:38:28.371147 master-0 kubenswrapper[31559]: I0216 02:38:28.370713 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1cac3b2-dea2-4c55-a8db-a71dbc5f95f4" containerName="dnsmasq-dns" Feb 16 02:38:28.371147 master-0 kubenswrapper[31559]: I0216 02:38:28.370747 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c275eb5-4e56-4ade-8c4a-142ed7dbafea" containerName="mariadb-account-create-update" Feb 16 02:38:28.371147 master-0 
kubenswrapper[31559]: I0216 02:38:28.370776 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a78217a-f6d3-4ff6-9a50-24367edc7a67" containerName="mariadb-account-create-update" Feb 16 02:38:28.371147 master-0 kubenswrapper[31559]: I0216 02:38:28.370816 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71ec1c5-6255-4555-b1b8-685613eb634b" containerName="mariadb-database-create" Feb 16 02:38:28.371147 master-0 kubenswrapper[31559]: I0216 02:38:28.370852 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2f0932-270d-4b92-a1a3-03181503039b" containerName="mariadb-database-create" Feb 16 02:38:28.371147 master-0 kubenswrapper[31559]: I0216 02:38:28.370870 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="292a5fd9-3c50-42c9-8f8d-bfbe7dbec084" containerName="mariadb-account-create-update" Feb 16 02:38:28.372169 master-0 kubenswrapper[31559]: I0216 02:38:28.372116 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.375405 master-0 kubenswrapper[31559]: I0216 02:38:28.375353 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-72940-config-data" Feb 16 02:38:28.413046 master-0 kubenswrapper[31559]: I0216 02:38:28.412970 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kcsvv"] Feb 16 02:38:28.500240 master-0 kubenswrapper[31559]: I0216 02:38:28.500174 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 02:38:28.519993 master-0 kubenswrapper[31559]: I0216 02:38:28.519927 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-combined-ca-bundle\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.519993 master-0 kubenswrapper[31559]: I0216 02:38:28.519993 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-db-sync-config-data\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.520231 master-0 kubenswrapper[31559]: I0216 02:38:28.520049 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqd7j\" (UniqueName: \"kubernetes.io/projected/2ce4194c-65d5-4538-bb48-d1f17d880599-kube-api-access-qqd7j\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.520231 master-0 kubenswrapper[31559]: I0216 02:38:28.520107 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-config-data\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.622309 master-0 kubenswrapper[31559]: I0216 02:38:28.622206 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-db-sync-config-data\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.622815 master-0 kubenswrapper[31559]: I0216 02:38:28.622365 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqd7j\" (UniqueName: \"kubernetes.io/projected/2ce4194c-65d5-4538-bb48-d1f17d880599-kube-api-access-qqd7j\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.622815 master-0 kubenswrapper[31559]: I0216 02:38:28.622506 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-config-data\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.622815 master-0 kubenswrapper[31559]: I0216 02:38:28.622655 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-combined-ca-bundle\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.628106 master-0 kubenswrapper[31559]: I0216 02:38:28.627614 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-db-sync-config-data\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.628650 master-0 kubenswrapper[31559]: I0216 02:38:28.628597 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-config-data\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.629687 master-0 kubenswrapper[31559]: I0216 02:38:28.629643 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-combined-ca-bundle\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.645214 master-0 kubenswrapper[31559]: I0216 02:38:28.645149 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqd7j\" (UniqueName: \"kubernetes.io/projected/2ce4194c-65d5-4538-bb48-d1f17d880599-kube-api-access-qqd7j\") pod \"glance-db-sync-kcsvv\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:28.704743 master-0 kubenswrapper[31559]: I0216 02:38:28.704640 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:29.001902 master-0 kubenswrapper[31559]: I0216 02:38:29.001716 31559 trace.go:236] Trace[833718213]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (16-Feb-2026 02:38:27.904) (total time: 1097ms): Feb 16 02:38:29.001902 master-0 kubenswrapper[31559]: Trace[833718213]: [1.097193003s] [1.097193003s] END Feb 16 02:38:29.161479 master-0 kubenswrapper[31559]: I0216 02:38:29.161290 31559 trace.go:236] Trace[171949165]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (16-Feb-2026 02:38:27.903) (total time: 1257ms): Feb 16 02:38:29.161479 master-0 kubenswrapper[31559]: Trace[171949165]: [1.257655851s] [1.257655851s] END Feb 16 02:38:29.227228 master-0 kubenswrapper[31559]: I0216 02:38:29.227168 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"e4528b4d0cf13b2e591496a3cf86a4baa5a5458d3d7d925be69e017b07c126ec"} Feb 16 02:38:29.285910 master-0 kubenswrapper[31559]: I0216 02:38:29.285860 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kcsvv"] Feb 16 02:38:29.824025 master-0 kubenswrapper[31559]: I0216 02:38:29.823963 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:29.950640 master-0 kubenswrapper[31559]: I0216 02:38:29.950160 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb609b01-8137-43bc-a5b5-a9c3744a9067-etc-swift\") pod \"cb609b01-8137-43bc-a5b5-a9c3744a9067\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " Feb 16 02:38:29.950640 master-0 kubenswrapper[31559]: I0216 02:38:29.950250 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-ring-data-devices\") pod \"cb609b01-8137-43bc-a5b5-a9c3744a9067\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " Feb 16 02:38:29.950640 master-0 kubenswrapper[31559]: I0216 02:38:29.950307 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf8nl\" (UniqueName: \"kubernetes.io/projected/cb609b01-8137-43bc-a5b5-a9c3744a9067-kube-api-access-nf8nl\") pod \"cb609b01-8137-43bc-a5b5-a9c3744a9067\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " Feb 16 02:38:29.950640 master-0 kubenswrapper[31559]: I0216 02:38:29.950345 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-dispersionconf\") pod \"cb609b01-8137-43bc-a5b5-a9c3744a9067\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " Feb 16 02:38:29.950640 master-0 kubenswrapper[31559]: I0216 02:38:29.950381 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-swiftconf\") pod \"cb609b01-8137-43bc-a5b5-a9c3744a9067\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " Feb 16 02:38:29.950640 master-0 kubenswrapper[31559]: I0216 02:38:29.950403 31559 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-combined-ca-bundle\") pod \"cb609b01-8137-43bc-a5b5-a9c3744a9067\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " Feb 16 02:38:29.951234 master-0 kubenswrapper[31559]: I0216 02:38:29.950701 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-scripts\") pod \"cb609b01-8137-43bc-a5b5-a9c3744a9067\" (UID: \"cb609b01-8137-43bc-a5b5-a9c3744a9067\") " Feb 16 02:38:29.954776 master-0 kubenswrapper[31559]: I0216 02:38:29.952998 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb609b01-8137-43bc-a5b5-a9c3744a9067-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cb609b01-8137-43bc-a5b5-a9c3744a9067" (UID: "cb609b01-8137-43bc-a5b5-a9c3744a9067"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:38:29.954776 master-0 kubenswrapper[31559]: I0216 02:38:29.953967 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "cb609b01-8137-43bc-a5b5-a9c3744a9067" (UID: "cb609b01-8137-43bc-a5b5-a9c3744a9067"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:29.955328 master-0 kubenswrapper[31559]: I0216 02:38:29.954963 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb609b01-8137-43bc-a5b5-a9c3744a9067-kube-api-access-nf8nl" (OuterVolumeSpecName: "kube-api-access-nf8nl") pod "cb609b01-8137-43bc-a5b5-a9c3744a9067" (UID: "cb609b01-8137-43bc-a5b5-a9c3744a9067"). InnerVolumeSpecName "kube-api-access-nf8nl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:29.962022 master-0 kubenswrapper[31559]: I0216 02:38:29.961970 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "cb609b01-8137-43bc-a5b5-a9c3744a9067" (UID: "cb609b01-8137-43bc-a5b5-a9c3744a9067"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:38:29.975910 master-0 kubenswrapper[31559]: I0216 02:38:29.975847 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-scripts" (OuterVolumeSpecName: "scripts") pod "cb609b01-8137-43bc-a5b5-a9c3744a9067" (UID: "cb609b01-8137-43bc-a5b5-a9c3744a9067"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:29.987120 master-0 kubenswrapper[31559]: I0216 02:38:29.987072 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "cb609b01-8137-43bc-a5b5-a9c3744a9067" (UID: "cb609b01-8137-43bc-a5b5-a9c3744a9067"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:38:29.989223 master-0 kubenswrapper[31559]: I0216 02:38:29.989116 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb609b01-8137-43bc-a5b5-a9c3744a9067" (UID: "cb609b01-8137-43bc-a5b5-a9c3744a9067"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:38:30.054494 master-0 kubenswrapper[31559]: I0216 02:38:30.053664 31559 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb609b01-8137-43bc-a5b5-a9c3744a9067-etc-swift\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:30.054494 master-0 kubenswrapper[31559]: I0216 02:38:30.053734 31559 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:30.054494 master-0 kubenswrapper[31559]: I0216 02:38:30.053748 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf8nl\" (UniqueName: \"kubernetes.io/projected/cb609b01-8137-43bc-a5b5-a9c3744a9067-kube-api-access-nf8nl\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:30.054494 master-0 kubenswrapper[31559]: I0216 02:38:30.053785 31559 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-dispersionconf\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:30.054494 master-0 kubenswrapper[31559]: I0216 02:38:30.053797 31559 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-swiftconf\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:30.054494 master-0 kubenswrapper[31559]: I0216 02:38:30.053808 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb609b01-8137-43bc-a5b5-a9c3744a9067-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:30.054494 master-0 kubenswrapper[31559]: I0216 02:38:30.053821 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/cb609b01-8137-43bc-a5b5-a9c3744a9067-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:30.237880 master-0 kubenswrapper[31559]: I0216 02:38:30.237819 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kcsvv" event={"ID":"2ce4194c-65d5-4538-bb48-d1f17d880599","Type":"ContainerStarted","Data":"3ea05d9b083f2146147339c772e0f40998e0df5716472748d54c48f17a173aff"} Feb 16 02:38:30.239704 master-0 kubenswrapper[31559]: I0216 02:38:30.239678 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"a77eeafaa6a75e52030cf6f156f1fe659d311f3f668862b27f21a0e2f4da07bb"} Feb 16 02:38:30.239704 master-0 kubenswrapper[31559]: I0216 02:38:30.239704 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"7810a5900aed8b2ba94660923b2a4d6b706bbef551807f679ec9bcd1b7923bf7"} Feb 16 02:38:30.241495 master-0 kubenswrapper[31559]: I0216 02:38:30.241472 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dtrws" event={"ID":"cb609b01-8137-43bc-a5b5-a9c3744a9067","Type":"ContainerDied","Data":"7eb3c02b621dd1c110b7cc269322aa57266f737d54b7f22d2cb39346faa57e92"} Feb 16 02:38:30.241495 master-0 kubenswrapper[31559]: I0216 02:38:30.241496 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb3c02b621dd1c110b7cc269322aa57266f737d54b7f22d2cb39346faa57e92" Feb 16 02:38:30.241907 master-0 kubenswrapper[31559]: I0216 02:38:30.241540 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dtrws" Feb 16 02:38:31.277468 master-0 kubenswrapper[31559]: I0216 02:38:31.274250 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"e40fd9bde6dcb5ba0be7ebf4a15574abdcf472aeb68a80630c9e6a7536e7c8ff"} Feb 16 02:38:31.277468 master-0 kubenswrapper[31559]: I0216 02:38:31.274522 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"34927c3c98cbd6ec4f4c97d528bd4c8186e2eb4f97947eaf379b27f1e12c601e"} Feb 16 02:38:31.475454 master-0 kubenswrapper[31559]: I0216 02:38:31.474483 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btqxw"] Feb 16 02:38:31.475454 master-0 kubenswrapper[31559]: E0216 02:38:31.474942 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb609b01-8137-43bc-a5b5-a9c3744a9067" containerName="swift-ring-rebalance" Feb 16 02:38:31.475454 master-0 kubenswrapper[31559]: I0216 02:38:31.474956 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb609b01-8137-43bc-a5b5-a9c3744a9067" containerName="swift-ring-rebalance" Feb 16 02:38:31.475454 master-0 kubenswrapper[31559]: I0216 02:38:31.475173 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb609b01-8137-43bc-a5b5-a9c3744a9067" containerName="swift-ring-rebalance" Feb 16 02:38:31.479457 master-0 kubenswrapper[31559]: I0216 02:38:31.475831 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:31.479457 master-0 kubenswrapper[31559]: I0216 02:38:31.478942 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 02:38:31.488227 master-0 kubenswrapper[31559]: I0216 02:38:31.488171 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btqxw"] Feb 16 02:38:31.586470 master-0 kubenswrapper[31559]: I0216 02:38:31.586328 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7f5\" (UniqueName: \"kubernetes.io/projected/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-kube-api-access-jc7f5\") pod \"root-account-create-update-btqxw\" (UID: \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\") " pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:31.586681 master-0 kubenswrapper[31559]: I0216 02:38:31.586585 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-operator-scripts\") pod \"root-account-create-update-btqxw\" (UID: \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\") " pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:31.688538 master-0 kubenswrapper[31559]: I0216 02:38:31.688424 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc7f5\" (UniqueName: \"kubernetes.io/projected/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-kube-api-access-jc7f5\") pod \"root-account-create-update-btqxw\" (UID: \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\") " pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:31.689189 master-0 kubenswrapper[31559]: I0216 02:38:31.688599 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-operator-scripts\") pod \"root-account-create-update-btqxw\" (UID: \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\") " pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:31.689554 master-0 kubenswrapper[31559]: I0216 02:38:31.689510 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-operator-scripts\") pod \"root-account-create-update-btqxw\" (UID: \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\") " pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:31.704046 master-0 kubenswrapper[31559]: I0216 02:38:31.703993 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc7f5\" (UniqueName: \"kubernetes.io/projected/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-kube-api-access-jc7f5\") pod \"root-account-create-update-btqxw\" (UID: \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\") " pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:31.818237 master-0 kubenswrapper[31559]: I0216 02:38:31.818175 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:32.286870 master-0 kubenswrapper[31559]: I0216 02:38:32.286779 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"e4ef5ca7c77942136f7a23318ab597961d1478649f4533c7a99565a5f65d434e"} Feb 16 02:38:32.404872 master-0 kubenswrapper[31559]: I0216 02:38:32.404794 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btqxw"] Feb 16 02:38:32.420750 master-0 kubenswrapper[31559]: W0216 02:38:32.418172 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf39c4df_ec6f_4f4f_b603_a9329fd44cf0.slice/crio-2d1dfcc7a37f0b218bbdfc406c3a91a0c7a8f5eea2f3bfe61c1194e800b90200 WatchSource:0}: Error finding container 2d1dfcc7a37f0b218bbdfc406c3a91a0c7a8f5eea2f3bfe61c1194e800b90200: Status 404 returned error can't find the container with id 2d1dfcc7a37f0b218bbdfc406c3a91a0c7a8f5eea2f3bfe61c1194e800b90200 Feb 16 02:38:33.301608 master-0 kubenswrapper[31559]: I0216 02:38:33.301555 31559 generic.go:334] "Generic (PLEG): container finished" podID="93fa9c51-4a98-4a66-8b10-a1213ca9f95e" containerID="bacdcbe8aac8083b3c666eea1d346ab690592ce6c9729c6971e709a915c460ec" exitCode=0 Feb 16 02:38:33.302258 master-0 kubenswrapper[31559]: I0216 02:38:33.301614 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"93fa9c51-4a98-4a66-8b10-a1213ca9f95e","Type":"ContainerDied","Data":"bacdcbe8aac8083b3c666eea1d346ab690592ce6c9729c6971e709a915c460ec"} Feb 16 02:38:33.307111 master-0 kubenswrapper[31559]: I0216 02:38:33.307008 31559 generic.go:334] "Generic (PLEG): container finished" podID="5cd89103-815f-45df-9b47-4e3db3c708f2" containerID="adb3da42042abb84afffd0acd2366c7547c839acc0d5fa7934ce46de37dc8721" exitCode=0 Feb 16 
02:38:33.307246 master-0 kubenswrapper[31559]: I0216 02:38:33.307188 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cd89103-815f-45df-9b47-4e3db3c708f2","Type":"ContainerDied","Data":"adb3da42042abb84afffd0acd2366c7547c839acc0d5fa7934ce46de37dc8721"} Feb 16 02:38:33.315831 master-0 kubenswrapper[31559]: I0216 02:38:33.312337 31559 generic.go:334] "Generic (PLEG): container finished" podID="df39c4df-ec6f-4f4f-b603-a9329fd44cf0" containerID="25acd88e9aba26c34afbd4f127e5ac731311211cdbfa70423487cc2dffadc1a4" exitCode=0 Feb 16 02:38:33.315831 master-0 kubenswrapper[31559]: I0216 02:38:33.312519 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btqxw" event={"ID":"df39c4df-ec6f-4f4f-b603-a9329fd44cf0","Type":"ContainerDied","Data":"25acd88e9aba26c34afbd4f127e5ac731311211cdbfa70423487cc2dffadc1a4"} Feb 16 02:38:33.315831 master-0 kubenswrapper[31559]: I0216 02:38:33.312603 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btqxw" event={"ID":"df39c4df-ec6f-4f4f-b603-a9329fd44cf0","Type":"ContainerStarted","Data":"2d1dfcc7a37f0b218bbdfc406c3a91a0c7a8f5eea2f3bfe61c1194e800b90200"} Feb 16 02:38:33.320531 master-0 kubenswrapper[31559]: I0216 02:38:33.319684 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"102244b4730ed26a8c5038aa8599cb7b8d2f563422355ac6d86a841d4d01b4b5"} Feb 16 02:38:33.320531 master-0 kubenswrapper[31559]: I0216 02:38:33.319758 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"6f31576d85f32747f63fcd318eda3f1e59f80feaf96fea143550f585e44ac83c"} Feb 16 02:38:33.320531 master-0 kubenswrapper[31559]: I0216 02:38:33.319779 31559 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"c339c998d9b912703037ef76dcbb49b53f3479ffb3f65f561936779997757f01"} Feb 16 02:38:33.436553 master-0 kubenswrapper[31559]: I0216 02:38:33.436196 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jhsdr" podUID="caef9948-8516-40b1-a940-6b2bea06cf6d" containerName="ovn-controller" probeResult="failure" output=< Feb 16 02:38:33.436553 master-0 kubenswrapper[31559]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 02:38:33.436553 master-0 kubenswrapper[31559]: > Feb 16 02:38:33.467732 master-0 kubenswrapper[31559]: I0216 02:38:33.467644 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:38:33.481525 master-0 kubenswrapper[31559]: I0216 02:38:33.481352 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w2cr4" Feb 16 02:38:33.865959 master-0 kubenswrapper[31559]: I0216 02:38:33.865785 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jhsdr-config-h48v9"] Feb 16 02:38:33.869560 master-0 kubenswrapper[31559]: I0216 02:38:33.867580 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:33.870470 master-0 kubenswrapper[31559]: I0216 02:38:33.869860 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 02:38:33.960909 master-0 kubenswrapper[31559]: I0216 02:38:33.957788 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-scripts\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:33.960909 master-0 kubenswrapper[31559]: I0216 02:38:33.957865 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-additional-scripts\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:33.960909 master-0 kubenswrapper[31559]: I0216 02:38:33.957896 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxr6\" (UniqueName: \"kubernetes.io/projected/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-kube-api-access-qjxr6\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:33.960909 master-0 kubenswrapper[31559]: I0216 02:38:33.958192 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run-ovn\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 
16 02:38:33.960909 master-0 kubenswrapper[31559]: I0216 02:38:33.958229 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:33.960909 master-0 kubenswrapper[31559]: I0216 02:38:33.958281 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-log-ovn\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:33.985841 master-0 kubenswrapper[31559]: I0216 02:38:33.985729 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jhsdr-config-h48v9"] Feb 16 02:38:34.068584 master-0 kubenswrapper[31559]: I0216 02:38:34.068498 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-log-ovn\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.068841 master-0 kubenswrapper[31559]: I0216 02:38:34.068622 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-scripts\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.068841 master-0 kubenswrapper[31559]: I0216 02:38:34.068665 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-additional-scripts\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.068841 master-0 kubenswrapper[31559]: I0216 02:38:34.068673 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-log-ovn\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.068841 master-0 kubenswrapper[31559]: I0216 02:38:34.068687 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxr6\" (UniqueName: \"kubernetes.io/projected/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-kube-api-access-qjxr6\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.068975 master-0 kubenswrapper[31559]: I0216 02:38:34.068878 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run-ovn\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.068975 master-0 kubenswrapper[31559]: I0216 02:38:34.068910 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.069329 master-0 kubenswrapper[31559]: I0216 02:38:34.069308 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run-ovn\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.069424 master-0 kubenswrapper[31559]: I0216 02:38:34.069374 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.070177 master-0 kubenswrapper[31559]: I0216 02:38:34.070151 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-additional-scripts\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.071175 master-0 kubenswrapper[31559]: I0216 02:38:34.071151 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-scripts\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.109898 master-0 kubenswrapper[31559]: I0216 02:38:34.104351 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxr6\" (UniqueName: \"kubernetes.io/projected/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-kube-api-access-qjxr6\") pod \"ovn-controller-jhsdr-config-h48v9\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") " pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.187705 master-0 kubenswrapper[31559]: I0216 
02:38:34.187641 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jhsdr-config-h48v9" Feb 16 02:38:34.331992 master-0 kubenswrapper[31559]: I0216 02:38:34.331946 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"93fa9c51-4a98-4a66-8b10-a1213ca9f95e","Type":"ContainerStarted","Data":"4b252849fd3ca71e5f58e28815c3405f4909d650bb241de4795a4c507071349d"} Feb 16 02:38:34.332813 master-0 kubenswrapper[31559]: I0216 02:38:34.332772 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 02:38:34.338613 master-0 kubenswrapper[31559]: I0216 02:38:34.338580 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cd89103-815f-45df-9b47-4e3db3c708f2","Type":"ContainerStarted","Data":"e278d1ef643283fb63803547ada7cdab0380e75896f8a27251b049a1d2e3f532"} Feb 16 02:38:34.373275 master-0 kubenswrapper[31559]: I0216 02:38:34.373181 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.459260181 podStartE2EDuration="1m1.373163565s" podCreationTimestamp="2026-02-16 02:37:33 +0000 UTC" firstStartedPulling="2026-02-16 02:37:51.505088155 +0000 UTC m=+923.849694170" lastFinishedPulling="2026-02-16 02:37:58.418991539 +0000 UTC m=+930.763597554" observedRunningTime="2026-02-16 02:38:34.362479611 +0000 UTC m=+966.707085626" watchObservedRunningTime="2026-02-16 02:38:34.373163565 +0000 UTC m=+966.717769580" Feb 16 02:38:34.423520 master-0 kubenswrapper[31559]: I0216 02:38:34.423005 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.693554296 podStartE2EDuration="1m1.422981201s" podCreationTimestamp="2026-02-16 02:37:33 +0000 UTC" firstStartedPulling="2026-02-16 02:37:50.6890008 +0000 UTC m=+923.033606845" 
lastFinishedPulling="2026-02-16 02:37:58.418427725 +0000 UTC m=+930.763033750" observedRunningTime="2026-02-16 02:38:34.392994453 +0000 UTC m=+966.737600468" watchObservedRunningTime="2026-02-16 02:38:34.422981201 +0000 UTC m=+966.767587216" Feb 16 02:38:34.808808 master-0 kubenswrapper[31559]: I0216 02:38:34.808736 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jhsdr-config-h48v9"] Feb 16 02:38:34.854634 master-0 kubenswrapper[31559]: I0216 02:38:34.854577 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btqxw" Feb 16 02:38:34.888492 master-0 kubenswrapper[31559]: I0216 02:38:34.887753 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-operator-scripts\") pod \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\" (UID: \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\") " Feb 16 02:38:34.888492 master-0 kubenswrapper[31559]: I0216 02:38:34.887967 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc7f5\" (UniqueName: \"kubernetes.io/projected/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-kube-api-access-jc7f5\") pod \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\" (UID: \"df39c4df-ec6f-4f4f-b603-a9329fd44cf0\") " Feb 16 02:38:34.889998 master-0 kubenswrapper[31559]: I0216 02:38:34.889011 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df39c4df-ec6f-4f4f-b603-a9329fd44cf0" (UID: "df39c4df-ec6f-4f4f-b603-a9329fd44cf0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:38:34.891777 master-0 kubenswrapper[31559]: I0216 02:38:34.891660 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-kube-api-access-jc7f5" (OuterVolumeSpecName: "kube-api-access-jc7f5") pod "df39c4df-ec6f-4f4f-b603-a9329fd44cf0" (UID: "df39c4df-ec6f-4f4f-b603-a9329fd44cf0"). InnerVolumeSpecName "kube-api-access-jc7f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:38:34.995044 master-0 kubenswrapper[31559]: I0216 02:38:34.994995 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:34.995044 master-0 kubenswrapper[31559]: I0216 02:38:34.995029 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc7f5\" (UniqueName: \"kubernetes.io/projected/df39c4df-ec6f-4f4f-b603-a9329fd44cf0-kube-api-access-jc7f5\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:35.350504 master-0 kubenswrapper[31559]: I0216 02:38:35.350420 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btqxw" event={"ID":"df39c4df-ec6f-4f4f-b603-a9329fd44cf0","Type":"ContainerDied","Data":"2d1dfcc7a37f0b218bbdfc406c3a91a0c7a8f5eea2f3bfe61c1194e800b90200"}
Feb 16 02:38:35.350504 master-0 kubenswrapper[31559]: I0216 02:38:35.350507 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1dfcc7a37f0b218bbdfc406c3a91a0c7a8f5eea2f3bfe61c1194e800b90200"
Feb 16 02:38:35.351200 master-0 kubenswrapper[31559]: I0216 02:38:35.350564 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btqxw"
Feb 16 02:38:35.361697 master-0 kubenswrapper[31559]: I0216 02:38:35.361175 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"3bd4f9c2718ea6879833fc8be6a91d34f5a6fbd4448cd066f03db892e97f7929"}
Feb 16 02:38:35.361697 master-0 kubenswrapper[31559]: I0216 02:38:35.361232 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"3adf9e05329e0762fd9a5366c259a4855ed6a228ac04aaaf1b34c3881a9d143f"}
Feb 16 02:38:35.361697 master-0 kubenswrapper[31559]: I0216 02:38:35.361252 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"3fc25ffb1332ef51f035fc2e4059e2aaf3bb61635a3b191198ad09a8636e78b9"}
Feb 16 02:38:35.361697 master-0 kubenswrapper[31559]: I0216 02:38:35.361266 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"3b69e2c2188d0971383d3f8ac3e39b92ab6bd8cfe76b731f0fae627beccbbcfe"}
Feb 16 02:38:35.365583 master-0 kubenswrapper[31559]: I0216 02:38:35.365522 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jhsdr-config-h48v9" event={"ID":"d1339f4e-22d9-458e-a7ff-a215f38eb3ba","Type":"ContainerStarted","Data":"38a91cad39c4c9641fa010eb29c4f3df093c2304b8da2a63043280fb9e9bd1be"}
Feb 16 02:38:35.365680 master-0 kubenswrapper[31559]: I0216 02:38:35.365597 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jhsdr-config-h48v9" event={"ID":"d1339f4e-22d9-458e-a7ff-a215f38eb3ba","Type":"ContainerStarted","Data":"af61c018dc9c6763e430add13960fd147b6b8b6972e29d7f1b58413afe4ba79c"}
Feb 16 02:38:35.394729 master-0 kubenswrapper[31559]: I0216 02:38:35.394522 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jhsdr-config-h48v9" podStartSLOduration=2.394504366 podStartE2EDuration="2.394504366s" podCreationTimestamp="2026-02-16 02:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:35.388827671 +0000 UTC m=+967.733433686" watchObservedRunningTime="2026-02-16 02:38:35.394504366 +0000 UTC m=+967.739110381"
Feb 16 02:38:36.376220 master-0 kubenswrapper[31559]: I0216 02:38:36.376152 31559 generic.go:334] "Generic (PLEG): container finished" podID="d1339f4e-22d9-458e-a7ff-a215f38eb3ba" containerID="38a91cad39c4c9641fa010eb29c4f3df093c2304b8da2a63043280fb9e9bd1be" exitCode=0
Feb 16 02:38:36.376220 master-0 kubenswrapper[31559]: I0216 02:38:36.376213 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jhsdr-config-h48v9" event={"ID":"d1339f4e-22d9-458e-a7ff-a215f38eb3ba","Type":"ContainerDied","Data":"38a91cad39c4c9641fa010eb29c4f3df093c2304b8da2a63043280fb9e9bd1be"}
Feb 16 02:38:38.429781 master-0 kubenswrapper[31559]: I0216 02:38:38.429720 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jhsdr"
Feb 16 02:38:40.896473 master-0 kubenswrapper[31559]: I0216 02:38:40.896188 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 02:38:41.911489 master-0 kubenswrapper[31559]: I0216 02:38:41.911385 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jhsdr-config-h48v9"
Feb 16 02:38:41.976537 master-0 kubenswrapper[31559]: I0216 02:38:41.976479 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run-ovn\") pod \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") "
Feb 16 02:38:41.976914 master-0 kubenswrapper[31559]: I0216 02:38:41.976584 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d1339f4e-22d9-458e-a7ff-a215f38eb3ba" (UID: "d1339f4e-22d9-458e-a7ff-a215f38eb3ba"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:38:41.977100 master-0 kubenswrapper[31559]: I0216 02:38:41.977069 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run\") pod \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") "
Feb 16 02:38:41.977457 master-0 kubenswrapper[31559]: I0216 02:38:41.977316 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run" (OuterVolumeSpecName: "var-run") pod "d1339f4e-22d9-458e-a7ff-a215f38eb3ba" (UID: "d1339f4e-22d9-458e-a7ff-a215f38eb3ba"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:38:41.977457 master-0 kubenswrapper[31559]: I0216 02:38:41.977395 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-additional-scripts\") pod \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") "
Feb 16 02:38:41.977649 master-0 kubenswrapper[31559]: I0216 02:38:41.977604 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-scripts\") pod \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") "
Feb 16 02:38:41.977806 master-0 kubenswrapper[31559]: I0216 02:38:41.977751 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-log-ovn\") pod \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") "
Feb 16 02:38:41.977946 master-0 kubenswrapper[31559]: I0216 02:38:41.977830 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjxr6\" (UniqueName: \"kubernetes.io/projected/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-kube-api-access-qjxr6\") pod \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\" (UID: \"d1339f4e-22d9-458e-a7ff-a215f38eb3ba\") "
Feb 16 02:38:41.978836 master-0 kubenswrapper[31559]: I0216 02:38:41.978786 31559 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run-ovn\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:41.978836 master-0 kubenswrapper[31559]: I0216 02:38:41.978828 31559 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-run\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:41.979329 master-0 kubenswrapper[31559]: I0216 02:38:41.979239 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d1339f4e-22d9-458e-a7ff-a215f38eb3ba" (UID: "d1339f4e-22d9-458e-a7ff-a215f38eb3ba"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:38:41.980121 master-0 kubenswrapper[31559]: I0216 02:38:41.980087 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d1339f4e-22d9-458e-a7ff-a215f38eb3ba" (UID: "d1339f4e-22d9-458e-a7ff-a215f38eb3ba"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:38:41.982377 master-0 kubenswrapper[31559]: I0216 02:38:41.982317 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-scripts" (OuterVolumeSpecName: "scripts") pod "d1339f4e-22d9-458e-a7ff-a215f38eb3ba" (UID: "d1339f4e-22d9-458e-a7ff-a215f38eb3ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:38:41.986276 master-0 kubenswrapper[31559]: I0216 02:38:41.986241 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-kube-api-access-qjxr6" (OuterVolumeSpecName: "kube-api-access-qjxr6") pod "d1339f4e-22d9-458e-a7ff-a215f38eb3ba" (UID: "d1339f4e-22d9-458e-a7ff-a215f38eb3ba"). InnerVolumeSpecName "kube-api-access-qjxr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:38:42.080873 master-0 kubenswrapper[31559]: I0216 02:38:42.080824 31559 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-var-log-ovn\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:42.080873 master-0 kubenswrapper[31559]: I0216 02:38:42.080867 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjxr6\" (UniqueName: \"kubernetes.io/projected/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-kube-api-access-qjxr6\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:42.081022 master-0 kubenswrapper[31559]: I0216 02:38:42.080882 31559 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-additional-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:42.081022 master-0 kubenswrapper[31559]: I0216 02:38:42.080895 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1339f4e-22d9-458e-a7ff-a215f38eb3ba-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:38:42.469218 master-0 kubenswrapper[31559]: I0216 02:38:42.469158 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"5f37303000ede126dca0756eaefff6a23cb2c80ce74750ee735853bf1a9780e1"}
Feb 16 02:38:42.469309 master-0 kubenswrapper[31559]: I0216 02:38:42.469225 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"3c6a8473b09bfa32fe6f113091ebf7bf7fbf770518157cb24b1f01c8b3ccdc5b"}
Feb 16 02:38:42.469309 master-0 kubenswrapper[31559]: I0216 02:38:42.469240 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c53aef07-7967-4da8-a56a-018b25e9b92e","Type":"ContainerStarted","Data":"2526729415da3cb099853b183b5ab58f5a1b9d77973ff9376ee4b19754db53c4"}
Feb 16 02:38:42.472092 master-0 kubenswrapper[31559]: I0216 02:38:42.472001 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jhsdr-config-h48v9" event={"ID":"d1339f4e-22d9-458e-a7ff-a215f38eb3ba","Type":"ContainerDied","Data":"af61c018dc9c6763e430add13960fd147b6b8b6972e29d7f1b58413afe4ba79c"}
Feb 16 02:38:42.472092 master-0 kubenswrapper[31559]: I0216 02:38:42.472085 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af61c018dc9c6763e430add13960fd147b6b8b6972e29d7f1b58413afe4ba79c"
Feb 16 02:38:42.472210 master-0 kubenswrapper[31559]: I0216 02:38:42.472192 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jhsdr-config-h48v9"
Feb 16 02:38:42.506026 master-0 kubenswrapper[31559]: I0216 02:38:42.505957 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=27.744160497 podStartE2EDuration="33.505942242s" podCreationTimestamp="2026-02-16 02:38:09 +0000 UTC" firstStartedPulling="2026-02-16 02:38:28.511477812 +0000 UTC m=+960.856083837" lastFinishedPulling="2026-02-16 02:38:34.273259567 +0000 UTC m=+966.617865582" observedRunningTime="2026-02-16 02:38:42.502896884 +0000 UTC m=+974.847502919" watchObservedRunningTime="2026-02-16 02:38:42.505942242 +0000 UTC m=+974.850548257"
Feb 16 02:38:42.849657 master-0 kubenswrapper[31559]: I0216 02:38:42.849526 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c6f66bf95-nbpjz"]
Feb 16 02:38:42.850245 master-0 kubenswrapper[31559]: E0216 02:38:42.850217 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df39c4df-ec6f-4f4f-b603-a9329fd44cf0" containerName="mariadb-account-create-update"
Feb 16 02:38:42.850334 master-0 kubenswrapper[31559]: I0216 02:38:42.850247 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="df39c4df-ec6f-4f4f-b603-a9329fd44cf0" containerName="mariadb-account-create-update"
Feb 16 02:38:42.850383 master-0 kubenswrapper[31559]: E0216 02:38:42.850361 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1339f4e-22d9-458e-a7ff-a215f38eb3ba" containerName="ovn-config"
Feb 16 02:38:42.850383 master-0 kubenswrapper[31559]: I0216 02:38:42.850379 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1339f4e-22d9-458e-a7ff-a215f38eb3ba" containerName="ovn-config"
Feb 16 02:38:42.850937 master-0 kubenswrapper[31559]: I0216 02:38:42.850709 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="df39c4df-ec6f-4f4f-b603-a9329fd44cf0" containerName="mariadb-account-create-update"
Feb 16 02:38:42.850937 master-0 kubenswrapper[31559]: I0216 02:38:42.850762 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1339f4e-22d9-458e-a7ff-a215f38eb3ba" containerName="ovn-config"
Feb 16 02:38:42.862310 master-0 kubenswrapper[31559]: I0216 02:38:42.861290 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:42.864565 master-0 kubenswrapper[31559]: I0216 02:38:42.863791 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 16 02:38:42.877715 master-0 kubenswrapper[31559]: I0216 02:38:42.877627 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c6f66bf95-nbpjz"]
Feb 16 02:38:42.909457 master-0 kubenswrapper[31559]: I0216 02:38:42.903059 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-config\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:42.909457 master-0 kubenswrapper[31559]: I0216 02:38:42.903191 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-swift-storage-0\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:42.909457 master-0 kubenswrapper[31559]: I0216 02:38:42.903217 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-svc\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:42.909457 master-0 kubenswrapper[31559]: I0216 02:38:42.903241 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk479\" (UniqueName: \"kubernetes.io/projected/1003e078-0808-429d-99a8-18f4497431cd-kube-api-access-qk479\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:42.909457 master-0 kubenswrapper[31559]: I0216 02:38:42.903265 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-sb\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:42.909457 master-0 kubenswrapper[31559]: I0216 02:38:42.903312 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-nb\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.006031 master-0 kubenswrapper[31559]: I0216 02:38:43.005852 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-swift-storage-0\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.006651 master-0 kubenswrapper[31559]: I0216 02:38:43.006068 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-svc\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.006651 master-0 kubenswrapper[31559]: I0216 02:38:43.006123 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk479\" (UniqueName: \"kubernetes.io/projected/1003e078-0808-429d-99a8-18f4497431cd-kube-api-access-qk479\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.006651 master-0 kubenswrapper[31559]: I0216 02:38:43.006548 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-sb\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.007596 master-0 kubenswrapper[31559]: I0216 02:38:43.007145 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-nb\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.007596 master-0 kubenswrapper[31559]: I0216 02:38:43.007215 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-config\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.007983 master-0 kubenswrapper[31559]: I0216 02:38:43.007933 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-swift-storage-0\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.008199 master-0 kubenswrapper[31559]: I0216 02:38:43.008157 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-svc\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.008689 master-0 kubenswrapper[31559]: I0216 02:38:43.008647 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-sb\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.009638 master-0 kubenswrapper[31559]: I0216 02:38:43.009589 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-nb\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.010067 master-0 kubenswrapper[31559]: I0216 02:38:43.010013 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-config\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.029151 master-0 kubenswrapper[31559]: I0216 02:38:43.029095 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk479\" (UniqueName: \"kubernetes.io/projected/1003e078-0808-429d-99a8-18f4497431cd-kube-api-access-qk479\") pod \"dnsmasq-dns-c6f66bf95-nbpjz\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.039587 master-0 kubenswrapper[31559]: I0216 02:38:43.039521 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jhsdr-config-h48v9"]
Feb 16 02:38:43.054423 master-0 kubenswrapper[31559]: I0216 02:38:43.054354 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jhsdr-config-h48v9"]
Feb 16 02:38:43.221029 master-0 kubenswrapper[31559]: I0216 02:38:43.220945 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:43.488211 master-0 kubenswrapper[31559]: I0216 02:38:43.487813 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kcsvv" event={"ID":"2ce4194c-65d5-4538-bb48-d1f17d880599","Type":"ContainerStarted","Data":"fe2b4c7f21c19332fed98f625225e4c1c08d28051d33eeb444a714eb51c66ca1"}
Feb 16 02:38:43.514747 master-0 kubenswrapper[31559]: I0216 02:38:43.514655 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kcsvv" podStartSLOduration=3.080072329 podStartE2EDuration="15.514636669s" podCreationTimestamp="2026-02-16 02:38:28 +0000 UTC" firstStartedPulling="2026-02-16 02:38:29.336255279 +0000 UTC m=+961.680861294" lastFinishedPulling="2026-02-16 02:38:41.770819579 +0000 UTC m=+974.115425634" observedRunningTime="2026-02-16 02:38:43.502814477 +0000 UTC m=+975.847420492" watchObservedRunningTime="2026-02-16 02:38:43.514636669 +0000 UTC m=+975.859242684"
Feb 16 02:38:43.758178 master-0 kubenswrapper[31559]: I0216 02:38:43.758107 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c6f66bf95-nbpjz"]
Feb 16 02:38:43.791888 master-0 kubenswrapper[31559]: W0216 02:38:43.791781 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1003e078_0808_429d_99a8_18f4497431cd.slice/crio-deea3f94164b7f01864045de10337f558a86a0363163677fe3c585c4cbd65546 WatchSource:0}: Error finding container deea3f94164b7f01864045de10337f558a86a0363163677fe3c585c4cbd65546: Status 404 returned error can't find the container with id deea3f94164b7f01864045de10337f558a86a0363163677fe3c585c4cbd65546
Feb 16 02:38:43.947270 master-0 kubenswrapper[31559]: I0216 02:38:43.947182 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1339f4e-22d9-458e-a7ff-a215f38eb3ba" path="/var/lib/kubelet/pods/d1339f4e-22d9-458e-a7ff-a215f38eb3ba/volumes"
Feb 16 02:38:44.508479 master-0 kubenswrapper[31559]: I0216 02:38:44.507725 31559 generic.go:334] "Generic (PLEG): container finished" podID="1003e078-0808-429d-99a8-18f4497431cd" containerID="f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30" exitCode=0
Feb 16 02:38:44.509385 master-0 kubenswrapper[31559]: I0216 02:38:44.509342 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" event={"ID":"1003e078-0808-429d-99a8-18f4497431cd","Type":"ContainerDied","Data":"f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30"}
Feb 16 02:38:44.509385 master-0 kubenswrapper[31559]: I0216 02:38:44.509377 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" event={"ID":"1003e078-0808-429d-99a8-18f4497431cd","Type":"ContainerStarted","Data":"deea3f94164b7f01864045de10337f558a86a0363163677fe3c585c4cbd65546"}
Feb 16 02:38:45.526577 master-0 kubenswrapper[31559]: I0216 02:38:45.526476 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" event={"ID":"1003e078-0808-429d-99a8-18f4497431cd","Type":"ContainerStarted","Data":"788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001"}
Feb 16 02:38:45.527791 master-0 kubenswrapper[31559]: I0216 02:38:45.526720 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz"
Feb 16 02:38:45.576174 master-0 kubenswrapper[31559]: I0216 02:38:45.576020 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" podStartSLOduration=3.575991569 podStartE2EDuration="3.575991569s" podCreationTimestamp="2026-02-16 02:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:45.56040983 +0000 UTC m=+977.905015885" watchObservedRunningTime="2026-02-16 02:38:45.575991569 +0000 UTC m=+977.920597614"
Feb 16 02:38:49.260762 master-0 kubenswrapper[31559]: I0216 02:38:49.260679 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 02:38:49.598580 master-0 kubenswrapper[31559]: I0216 02:38:49.597959 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-5bsxg"]
Feb 16 02:38:49.600246 master-0 kubenswrapper[31559]: I0216 02:38:49.600211 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5bsxg"
Feb 16 02:38:49.621352 master-0 kubenswrapper[31559]: I0216 02:38:49.621294 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5bsxg"]
Feb 16 02:38:49.722496 master-0 kubenswrapper[31559]: I0216 02:38:49.721485 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7568-account-create-update-nxkd6"]
Feb 16 02:38:49.723118 master-0 kubenswrapper[31559]: I0216 02:38:49.723084 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7568-account-create-update-nxkd6"
Feb 16 02:38:49.734069 master-0 kubenswrapper[31559]: I0216 02:38:49.734029 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 16 02:38:49.737634 master-0 kubenswrapper[31559]: I0216 02:38:49.737588 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7568-account-create-update-nxkd6"]
Feb 16 02:38:49.770557 master-0 kubenswrapper[31559]: I0216 02:38:49.770490 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9pww\" (UniqueName: \"kubernetes.io/projected/ce0b20e3-b4e6-42eb-a920-588caa2195df-kube-api-access-k9pww\") pod \"cinder-db-create-5bsxg\" (UID: \"ce0b20e3-b4e6-42eb-a920-588caa2195df\") " pod="openstack/cinder-db-create-5bsxg"
Feb 16 02:38:49.770752 master-0 kubenswrapper[31559]: I0216 02:38:49.770615 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0b20e3-b4e6-42eb-a920-588caa2195df-operator-scripts\") pod \"cinder-db-create-5bsxg\" (UID: \"ce0b20e3-b4e6-42eb-a920-588caa2195df\") " pod="openstack/cinder-db-create-5bsxg"
Feb 16 02:38:49.873012 master-0 kubenswrapper[31559]: I0216 02:38:49.872872 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-operator-scripts\") pod \"cinder-7568-account-create-update-nxkd6\" (UID: \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\") " pod="openstack/cinder-7568-account-create-update-nxkd6"
Feb 16 02:38:49.873012 master-0 kubenswrapper[31559]: I0216 02:38:49.872964 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9pww\" (UniqueName: \"kubernetes.io/projected/ce0b20e3-b4e6-42eb-a920-588caa2195df-kube-api-access-k9pww\") pod \"cinder-db-create-5bsxg\" (UID: \"ce0b20e3-b4e6-42eb-a920-588caa2195df\") " pod="openstack/cinder-db-create-5bsxg"
Feb 16 02:38:49.873282 master-0 kubenswrapper[31559]: I0216 02:38:49.873046 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lphn\" (UniqueName: \"kubernetes.io/projected/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-kube-api-access-6lphn\") pod \"cinder-7568-account-create-update-nxkd6\" (UID: \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\") " pod="openstack/cinder-7568-account-create-update-nxkd6"
Feb 16 02:38:49.873282 master-0 kubenswrapper[31559]: I0216 02:38:49.873085 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0b20e3-b4e6-42eb-a920-588caa2195df-operator-scripts\") pod \"cinder-db-create-5bsxg\" (UID: \"ce0b20e3-b4e6-42eb-a920-588caa2195df\") " pod="openstack/cinder-db-create-5bsxg"
Feb 16 02:38:49.873987 master-0 kubenswrapper[31559]: I0216 02:38:49.873940 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0b20e3-b4e6-42eb-a920-588caa2195df-operator-scripts\") pod \"cinder-db-create-5bsxg\" (UID: \"ce0b20e3-b4e6-42eb-a920-588caa2195df\") " pod="openstack/cinder-db-create-5bsxg"
Feb 16 02:38:49.892399 master-0 kubenswrapper[31559]: I0216 02:38:49.892291 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7rwhn"]
Feb 16 02:38:49.893797 master-0 kubenswrapper[31559]: I0216 02:38:49.893760 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7rwhn"
Feb 16 02:38:49.922336 master-0 kubenswrapper[31559]: I0216 02:38:49.909349 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9pww\" (UniqueName: \"kubernetes.io/projected/ce0b20e3-b4e6-42eb-a920-588caa2195df-kube-api-access-k9pww\") pod \"cinder-db-create-5bsxg\" (UID: \"ce0b20e3-b4e6-42eb-a920-588caa2195df\") " pod="openstack/cinder-db-create-5bsxg"
Feb 16 02:38:49.922336 master-0 kubenswrapper[31559]: I0216 02:38:49.910872 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7rwhn"]
Feb 16 02:38:49.972517 master-0 kubenswrapper[31559]: I0216 02:38:49.969331 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5bsxg"
Feb 16 02:38:49.974455 master-0 kubenswrapper[31559]: I0216 02:38:49.974408 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-operator-scripts\") pod \"cinder-7568-account-create-update-nxkd6\" (UID: \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\") " pod="openstack/cinder-7568-account-create-update-nxkd6"
Feb 16 02:38:49.974643 master-0 kubenswrapper[31559]: I0216 02:38:49.974621 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lphn\" (UniqueName: \"kubernetes.io/projected/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-kube-api-access-6lphn\") pod \"cinder-7568-account-create-update-nxkd6\" (UID: \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\") " pod="openstack/cinder-7568-account-create-update-nxkd6"
Feb 16 02:38:49.975254 master-0 kubenswrapper[31559]: I0216 02:38:49.975216 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-operator-scripts\") pod \"cinder-7568-account-create-update-nxkd6\" (UID: \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\") " pod="openstack/cinder-7568-account-create-update-nxkd6"
Feb 16 02:38:49.997672 master-0 kubenswrapper[31559]: I0216 02:38:49.997420 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lphn\" (UniqueName: \"kubernetes.io/projected/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-kube-api-access-6lphn\") pod \"cinder-7568-account-create-update-nxkd6\" (UID: \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\") " pod="openstack/cinder-7568-account-create-update-nxkd6"
Feb 16 02:38:50.000198 master-0 kubenswrapper[31559]: I0216 02:38:50.000163 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6654-account-create-update-5dvj4"]
Feb 16 02:38:50.001587 master-0 kubenswrapper[31559]: I0216 02:38:50.001562 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6654-account-create-update-5dvj4"
Feb 16 02:38:50.011211 master-0 kubenswrapper[31559]: I0216 02:38:50.010801 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 16 02:38:50.026486 master-0 kubenswrapper[31559]: I0216 02:38:50.025561 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6654-account-create-update-5dvj4"]
Feb 16 02:38:50.044301 master-0 kubenswrapper[31559]: I0216 02:38:50.044037 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7568-account-create-update-nxkd6"
Feb 16 02:38:50.079463 master-0 kubenswrapper[31559]: I0216 02:38:50.076051 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-operator-scripts\") pod \"neutron-db-create-7rwhn\" (UID: \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\") " pod="openstack/neutron-db-create-7rwhn"
Feb 16 02:38:50.079463 master-0 kubenswrapper[31559]: I0216 02:38:50.076199 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dv8q\" (UniqueName: \"kubernetes.io/projected/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-kube-api-access-6dv8q\") pod \"neutron-db-create-7rwhn\" (UID: \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\") " pod="openstack/neutron-db-create-7rwhn"
Feb 16 02:38:50.138545 master-0 kubenswrapper[31559]: I0216 02:38:50.136197 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qhrbh"]
Feb 16 02:38:50.138545 master-0 kubenswrapper[31559]: I0216 02:38:50.137510 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qhrbh"
Feb 16 02:38:50.144611 master-0 kubenswrapper[31559]: I0216 02:38:50.142865 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 02:38:50.144611 master-0 kubenswrapper[31559]: I0216 02:38:50.143130 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 02:38:50.144611 master-0 kubenswrapper[31559]: I0216 02:38:50.143282 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 02:38:50.152465 master-0 kubenswrapper[31559]: I0216 02:38:50.147815 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qhrbh"]
Feb 16 02:38:50.183399 master-0 kubenswrapper[31559]: I0216 02:38:50.178452 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-operator-scripts\") pod \"neutron-db-create-7rwhn\" (UID: \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\") " pod="openstack/neutron-db-create-7rwhn"
Feb 16 02:38:50.183399 master-0 kubenswrapper[31559]: I0216 02:38:50.178510 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-operator-scripts\") pod \"neutron-6654-account-create-update-5dvj4\" (UID: \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\") " pod="openstack/neutron-6654-account-create-update-5dvj4"
Feb 16 02:38:50.183399 master-0 kubenswrapper[31559]: I0216 02:38:50.179108 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2h7\" (UniqueName: \"kubernetes.io/projected/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-kube-api-access-5l2h7\") pod \"neutron-6654-account-create-update-5dvj4\" (UID:
\"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\") " pod="openstack/neutron-6654-account-create-update-5dvj4" Feb 16 02:38:50.183399 master-0 kubenswrapper[31559]: I0216 02:38:50.179199 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dv8q\" (UniqueName: \"kubernetes.io/projected/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-kube-api-access-6dv8q\") pod \"neutron-db-create-7rwhn\" (UID: \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\") " pod="openstack/neutron-db-create-7rwhn" Feb 16 02:38:50.207522 master-0 kubenswrapper[31559]: I0216 02:38:50.204628 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dv8q\" (UniqueName: \"kubernetes.io/projected/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-kube-api-access-6dv8q\") pod \"neutron-db-create-7rwhn\" (UID: \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\") " pod="openstack/neutron-db-create-7rwhn" Feb 16 02:38:50.207522 master-0 kubenswrapper[31559]: I0216 02:38:50.206621 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-operator-scripts\") pod \"neutron-db-create-7rwhn\" (UID: \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\") " pod="openstack/neutron-db-create-7rwhn" Feb 16 02:38:50.274468 master-0 kubenswrapper[31559]: I0216 02:38:50.273998 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rwhn" Feb 16 02:38:50.284832 master-0 kubenswrapper[31559]: I0216 02:38:50.280678 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlts\" (UniqueName: \"kubernetes.io/projected/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-kube-api-access-qmlts\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.284832 master-0 kubenswrapper[31559]: I0216 02:38:50.280760 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-combined-ca-bundle\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.284832 master-0 kubenswrapper[31559]: I0216 02:38:50.280831 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-operator-scripts\") pod \"neutron-6654-account-create-update-5dvj4\" (UID: \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\") " pod="openstack/neutron-6654-account-create-update-5dvj4" Feb 16 02:38:50.284832 master-0 kubenswrapper[31559]: I0216 02:38:50.280904 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2h7\" (UniqueName: \"kubernetes.io/projected/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-kube-api-access-5l2h7\") pod \"neutron-6654-account-create-update-5dvj4\" (UID: \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\") " pod="openstack/neutron-6654-account-create-update-5dvj4" Feb 16 02:38:50.284832 master-0 kubenswrapper[31559]: I0216 02:38:50.280931 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-config-data\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.284832 master-0 kubenswrapper[31559]: I0216 02:38:50.282189 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-operator-scripts\") pod \"neutron-6654-account-create-update-5dvj4\" (UID: \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\") " pod="openstack/neutron-6654-account-create-update-5dvj4" Feb 16 02:38:50.303547 master-0 kubenswrapper[31559]: I0216 02:38:50.303485 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2h7\" (UniqueName: \"kubernetes.io/projected/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-kube-api-access-5l2h7\") pod \"neutron-6654-account-create-update-5dvj4\" (UID: \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\") " pod="openstack/neutron-6654-account-create-update-5dvj4" Feb 16 02:38:50.384473 master-0 kubenswrapper[31559]: I0216 02:38:50.383491 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-combined-ca-bundle\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.384473 master-0 kubenswrapper[31559]: I0216 02:38:50.383675 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-config-data\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.384473 master-0 kubenswrapper[31559]: I0216 02:38:50.383710 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qmlts\" (UniqueName: \"kubernetes.io/projected/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-kube-api-access-qmlts\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.387607 master-0 kubenswrapper[31559]: I0216 02:38:50.386954 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-combined-ca-bundle\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.390851 master-0 kubenswrapper[31559]: I0216 02:38:50.390774 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-config-data\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.413461 master-0 kubenswrapper[31559]: I0216 02:38:50.409600 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlts\" (UniqueName: \"kubernetes.io/projected/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-kube-api-access-qmlts\") pod \"keystone-db-sync-qhrbh\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.500558 master-0 kubenswrapper[31559]: I0216 02:38:50.499896 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6654-account-create-update-5dvj4" Feb 16 02:38:50.534655 master-0 kubenswrapper[31559]: I0216 02:38:50.534567 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:38:50.595628 master-0 kubenswrapper[31559]: I0216 02:38:50.595561 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5bsxg"] Feb 16 02:38:50.597836 master-0 kubenswrapper[31559]: W0216 02:38:50.597785 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce0b20e3_b4e6_42eb_a920_588caa2195df.slice/crio-631b33fb0593285590bda7c31dcb37522a551348b7efe26c7f93afa888a34b82 WatchSource:0}: Error finding container 631b33fb0593285590bda7c31dcb37522a551348b7efe26c7f93afa888a34b82: Status 404 returned error can't find the container with id 631b33fb0593285590bda7c31dcb37522a551348b7efe26c7f93afa888a34b82 Feb 16 02:38:50.612803 master-0 kubenswrapper[31559]: I0216 02:38:50.612418 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5bsxg" event={"ID":"ce0b20e3-b4e6-42eb-a920-588caa2195df","Type":"ContainerStarted","Data":"631b33fb0593285590bda7c31dcb37522a551348b7efe26c7f93afa888a34b82"} Feb 16 02:38:50.837349 master-0 kubenswrapper[31559]: I0216 02:38:50.837300 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7568-account-create-update-nxkd6"] Feb 16 02:38:50.899960 master-0 kubenswrapper[31559]: I0216 02:38:50.899910 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 02:38:50.914333 master-0 kubenswrapper[31559]: I0216 02:38:50.914282 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7rwhn"] Feb 16 02:38:50.921556 master-0 kubenswrapper[31559]: W0216 02:38:50.921511 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb6fab0b_42dc_4235_bc08_73b63df4ed3a.slice/crio-740cd17f732097dfd46c12f965b2e3271c0589a1e85731367144cb1d67cf5a72 
WatchSource:0}: Error finding container 740cd17f732097dfd46c12f965b2e3271c0589a1e85731367144cb1d67cf5a72: Status 404 returned error can't find the container with id 740cd17f732097dfd46c12f965b2e3271c0589a1e85731367144cb1d67cf5a72 Feb 16 02:38:51.007002 master-0 kubenswrapper[31559]: I0216 02:38:51.006620 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6654-account-create-update-5dvj4"] Feb 16 02:38:51.013021 master-0 kubenswrapper[31559]: W0216 02:38:51.012963 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod143f4b4e_6ddb_47c3_bd5f_c370bc7905e2.slice/crio-cf4ce4c5e7fde219ed418a08f68bf110be75730a6adae35f08eae7432a5f32d7 WatchSource:0}: Error finding container cf4ce4c5e7fde219ed418a08f68bf110be75730a6adae35f08eae7432a5f32d7: Status 404 returned error can't find the container with id cf4ce4c5e7fde219ed418a08f68bf110be75730a6adae35f08eae7432a5f32d7 Feb 16 02:38:51.029788 master-0 kubenswrapper[31559]: E0216 02:38:51.029733 31559 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:45730->192.168.32.10:34313: write tcp 192.168.32.10:10250->192.168.32.10:42628: write: broken pipe Feb 16 02:38:51.115958 master-0 kubenswrapper[31559]: I0216 02:38:51.115919 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qhrbh"] Feb 16 02:38:51.624519 master-0 kubenswrapper[31559]: I0216 02:38:51.624428 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhrbh" event={"ID":"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf","Type":"ContainerStarted","Data":"7683da1f34cba8216094a614dde6391079c1964aa7643a92b61fd8caad013e7d"} Feb 16 02:38:51.626176 master-0 kubenswrapper[31559]: I0216 02:38:51.626100 31559 generic.go:334] "Generic (PLEG): container finished" podID="ce0b20e3-b4e6-42eb-a920-588caa2195df" 
containerID="46b54e5d6616bd6bba019be6f5c3a8dec553e57acb7c02a5d680b43627291646" exitCode=0 Feb 16 02:38:51.626226 master-0 kubenswrapper[31559]: I0216 02:38:51.626168 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5bsxg" event={"ID":"ce0b20e3-b4e6-42eb-a920-588caa2195df","Type":"ContainerDied","Data":"46b54e5d6616bd6bba019be6f5c3a8dec553e57acb7c02a5d680b43627291646"} Feb 16 02:38:51.633814 master-0 kubenswrapper[31559]: I0216 02:38:51.633732 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6654-account-create-update-5dvj4" event={"ID":"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2","Type":"ContainerStarted","Data":"738f69e8f6966da07711446b5a16ec8590a4a02219ffdc9c2f1d72e42cb6bf4e"} Feb 16 02:38:51.633887 master-0 kubenswrapper[31559]: I0216 02:38:51.633831 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6654-account-create-update-5dvj4" event={"ID":"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2","Type":"ContainerStarted","Data":"cf4ce4c5e7fde219ed418a08f68bf110be75730a6adae35f08eae7432a5f32d7"} Feb 16 02:38:51.636881 master-0 kubenswrapper[31559]: I0216 02:38:51.636831 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7568-account-create-update-nxkd6" event={"ID":"3cd0bd49-96d7-417e-9900-3efb9e8a2de0","Type":"ContainerStarted","Data":"7861aba56890b38321922f4fef5df9a19a7ceb579cfc023958c82a984b034d98"} Feb 16 02:38:51.636947 master-0 kubenswrapper[31559]: I0216 02:38:51.636890 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7568-account-create-update-nxkd6" event={"ID":"3cd0bd49-96d7-417e-9900-3efb9e8a2de0","Type":"ContainerStarted","Data":"0470ab9669fa0e69248f1bd40bf078fcf334aaeff3b8666f1290dc751f53a667"} Feb 16 02:38:51.638757 master-0 kubenswrapper[31559]: I0216 02:38:51.638705 31559 generic.go:334] "Generic (PLEG): container finished" podID="2ce4194c-65d5-4538-bb48-d1f17d880599" 
containerID="fe2b4c7f21c19332fed98f625225e4c1c08d28051d33eeb444a714eb51c66ca1" exitCode=0 Feb 16 02:38:51.638841 master-0 kubenswrapper[31559]: I0216 02:38:51.638803 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kcsvv" event={"ID":"2ce4194c-65d5-4538-bb48-d1f17d880599","Type":"ContainerDied","Data":"fe2b4c7f21c19332fed98f625225e4c1c08d28051d33eeb444a714eb51c66ca1"} Feb 16 02:38:51.642235 master-0 kubenswrapper[31559]: I0216 02:38:51.641098 31559 generic.go:334] "Generic (PLEG): container finished" podID="eb6fab0b-42dc-4235-bc08-73b63df4ed3a" containerID="ecdfbed0193bf44d16f7f7c379370462a576a2165d57038d68e23ba34c738f03" exitCode=0 Feb 16 02:38:51.642235 master-0 kubenswrapper[31559]: I0216 02:38:51.641160 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rwhn" event={"ID":"eb6fab0b-42dc-4235-bc08-73b63df4ed3a","Type":"ContainerDied","Data":"ecdfbed0193bf44d16f7f7c379370462a576a2165d57038d68e23ba34c738f03"} Feb 16 02:38:51.642235 master-0 kubenswrapper[31559]: I0216 02:38:51.641197 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rwhn" event={"ID":"eb6fab0b-42dc-4235-bc08-73b63df4ed3a","Type":"ContainerStarted","Data":"740cd17f732097dfd46c12f965b2e3271c0589a1e85731367144cb1d67cf5a72"} Feb 16 02:38:51.739484 master-0 kubenswrapper[31559]: I0216 02:38:51.735098 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6654-account-create-update-5dvj4" podStartSLOduration=2.7350811 podStartE2EDuration="2.7350811s" podCreationTimestamp="2026-02-16 02:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:51.707941425 +0000 UTC m=+984.052547440" watchObservedRunningTime="2026-02-16 02:38:51.7350811 +0000 UTC m=+984.079687105" Feb 16 02:38:51.753480 master-0 kubenswrapper[31559]: I0216 02:38:51.745814 31559 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7568-account-create-update-nxkd6" podStartSLOduration=2.745792015 podStartE2EDuration="2.745792015s" podCreationTimestamp="2026-02-16 02:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:51.729790735 +0000 UTC m=+984.074396740" watchObservedRunningTime="2026-02-16 02:38:51.745792015 +0000 UTC m=+984.090398030" Feb 16 02:38:52.654691 master-0 kubenswrapper[31559]: I0216 02:38:52.654626 31559 generic.go:334] "Generic (PLEG): container finished" podID="143f4b4e-6ddb-47c3-bd5f-c370bc7905e2" containerID="738f69e8f6966da07711446b5a16ec8590a4a02219ffdc9c2f1d72e42cb6bf4e" exitCode=0 Feb 16 02:38:52.655302 master-0 kubenswrapper[31559]: I0216 02:38:52.654708 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6654-account-create-update-5dvj4" event={"ID":"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2","Type":"ContainerDied","Data":"738f69e8f6966da07711446b5a16ec8590a4a02219ffdc9c2f1d72e42cb6bf4e"} Feb 16 02:38:52.656666 master-0 kubenswrapper[31559]: I0216 02:38:52.656619 31559 generic.go:334] "Generic (PLEG): container finished" podID="3cd0bd49-96d7-417e-9900-3efb9e8a2de0" containerID="7861aba56890b38321922f4fef5df9a19a7ceb579cfc023958c82a984b034d98" exitCode=0 Feb 16 02:38:52.656946 master-0 kubenswrapper[31559]: I0216 02:38:52.656903 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7568-account-create-update-nxkd6" event={"ID":"3cd0bd49-96d7-417e-9900-3efb9e8a2de0","Type":"ContainerDied","Data":"7861aba56890b38321922f4fef5df9a19a7ceb579cfc023958c82a984b034d98"} Feb 16 02:38:53.101171 master-0 kubenswrapper[31559]: I0216 02:38:53.101064 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rwhn" Feb 16 02:38:53.225599 master-0 kubenswrapper[31559]: I0216 02:38:53.224775 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" Feb 16 02:38:53.266263 master-0 kubenswrapper[31559]: I0216 02:38:53.266188 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-operator-scripts\") pod \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\" (UID: \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\") " Feb 16 02:38:53.266518 master-0 kubenswrapper[31559]: I0216 02:38:53.266380 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dv8q\" (UniqueName: \"kubernetes.io/projected/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-kube-api-access-6dv8q\") pod \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\" (UID: \"eb6fab0b-42dc-4235-bc08-73b63df4ed3a\") " Feb 16 02:38:53.270755 master-0 kubenswrapper[31559]: I0216 02:38:53.270683 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-kube-api-access-6dv8q" (OuterVolumeSpecName: "kube-api-access-6dv8q") pod "eb6fab0b-42dc-4235-bc08-73b63df4ed3a" (UID: "eb6fab0b-42dc-4235-bc08-73b63df4ed3a"). InnerVolumeSpecName "kube-api-access-6dv8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:53.271075 master-0 kubenswrapper[31559]: I0216 02:38:53.271013 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb6fab0b-42dc-4235-bc08-73b63df4ed3a" (UID: "eb6fab0b-42dc-4235-bc08-73b63df4ed3a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:53.301131 master-0 kubenswrapper[31559]: I0216 02:38:53.301068 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-plzpw"] Feb 16 02:38:53.301339 master-0 kubenswrapper[31559]: I0216 02:38:53.301298 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" podUID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerName="dnsmasq-dns" containerID="cri-o://f654109ada4b3445a4fbbdd048831e4736211bc00c4c70d2aecbe8baefe2e609" gracePeriod=10 Feb 16 02:38:53.340535 master-0 kubenswrapper[31559]: I0216 02:38:53.340195 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5bsxg" Feb 16 02:38:53.350599 master-0 kubenswrapper[31559]: I0216 02:38:53.346422 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:53.369862 master-0 kubenswrapper[31559]: I0216 02:38:53.368733 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:53.369862 master-0 kubenswrapper[31559]: I0216 02:38:53.368768 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dv8q\" (UniqueName: \"kubernetes.io/projected/eb6fab0b-42dc-4235-bc08-73b63df4ed3a-kube-api-access-6dv8q\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:53.473228 master-0 kubenswrapper[31559]: I0216 02:38:53.470007 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqd7j\" (UniqueName: \"kubernetes.io/projected/2ce4194c-65d5-4538-bb48-d1f17d880599-kube-api-access-qqd7j\") pod \"2ce4194c-65d5-4538-bb48-d1f17d880599\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " Feb 16 02:38:53.473228 
master-0 kubenswrapper[31559]: I0216 02:38:53.470074 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0b20e3-b4e6-42eb-a920-588caa2195df-operator-scripts\") pod \"ce0b20e3-b4e6-42eb-a920-588caa2195df\" (UID: \"ce0b20e3-b4e6-42eb-a920-588caa2195df\") " Feb 16 02:38:53.473228 master-0 kubenswrapper[31559]: I0216 02:38:53.470112 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9pww\" (UniqueName: \"kubernetes.io/projected/ce0b20e3-b4e6-42eb-a920-588caa2195df-kube-api-access-k9pww\") pod \"ce0b20e3-b4e6-42eb-a920-588caa2195df\" (UID: \"ce0b20e3-b4e6-42eb-a920-588caa2195df\") " Feb 16 02:38:53.473228 master-0 kubenswrapper[31559]: I0216 02:38:53.470252 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-config-data\") pod \"2ce4194c-65d5-4538-bb48-d1f17d880599\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " Feb 16 02:38:53.473228 master-0 kubenswrapper[31559]: I0216 02:38:53.470355 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-db-sync-config-data\") pod \"2ce4194c-65d5-4538-bb48-d1f17d880599\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " Feb 16 02:38:53.473228 master-0 kubenswrapper[31559]: I0216 02:38:53.470383 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-combined-ca-bundle\") pod \"2ce4194c-65d5-4538-bb48-d1f17d880599\" (UID: \"2ce4194c-65d5-4538-bb48-d1f17d880599\") " Feb 16 02:38:53.487768 master-0 kubenswrapper[31559]: I0216 02:38:53.484057 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ce0b20e3-b4e6-42eb-a920-588caa2195df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce0b20e3-b4e6-42eb-a920-588caa2195df" (UID: "ce0b20e3-b4e6-42eb-a920-588caa2195df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:53.510734 master-0 kubenswrapper[31559]: I0216 02:38:53.504798 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0b20e3-b4e6-42eb-a920-588caa2195df-kube-api-access-k9pww" (OuterVolumeSpecName: "kube-api-access-k9pww") pod "ce0b20e3-b4e6-42eb-a920-588caa2195df" (UID: "ce0b20e3-b4e6-42eb-a920-588caa2195df"). InnerVolumeSpecName "kube-api-access-k9pww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:53.544456 master-0 kubenswrapper[31559]: I0216 02:38:53.543189 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ce4194c-65d5-4538-bb48-d1f17d880599" (UID: "2ce4194c-65d5-4538-bb48-d1f17d880599"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:38:53.555714 master-0 kubenswrapper[31559]: I0216 02:38:53.555654 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce4194c-65d5-4538-bb48-d1f17d880599-kube-api-access-qqd7j" (OuterVolumeSpecName: "kube-api-access-qqd7j") pod "2ce4194c-65d5-4538-bb48-d1f17d880599" (UID: "2ce4194c-65d5-4538-bb48-d1f17d880599"). InnerVolumeSpecName "kube-api-access-qqd7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:53.582494 master-0 kubenswrapper[31559]: I0216 02:38:53.581304 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9pww\" (UniqueName: \"kubernetes.io/projected/ce0b20e3-b4e6-42eb-a920-588caa2195df-kube-api-access-k9pww\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:53.582494 master-0 kubenswrapper[31559]: I0216 02:38:53.581341 31559 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:53.582494 master-0 kubenswrapper[31559]: I0216 02:38:53.581350 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqd7j\" (UniqueName: \"kubernetes.io/projected/2ce4194c-65d5-4538-bb48-d1f17d880599-kube-api-access-qqd7j\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:53.582494 master-0 kubenswrapper[31559]: I0216 02:38:53.581361 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0b20e3-b4e6-42eb-a920-588caa2195df-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:53.598681 master-0 kubenswrapper[31559]: I0216 02:38:53.598618 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ce4194c-65d5-4538-bb48-d1f17d880599" (UID: "2ce4194c-65d5-4538-bb48-d1f17d880599"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:38:53.683627 master-0 kubenswrapper[31559]: I0216 02:38:53.683238 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:53.693820 master-0 kubenswrapper[31559]: I0216 02:38:53.687655 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-config-data" (OuterVolumeSpecName: "config-data") pod "2ce4194c-65d5-4538-bb48-d1f17d880599" (UID: "2ce4194c-65d5-4538-bb48-d1f17d880599"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:38:53.702642 master-0 kubenswrapper[31559]: I0216 02:38:53.698682 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5bsxg" event={"ID":"ce0b20e3-b4e6-42eb-a920-588caa2195df","Type":"ContainerDied","Data":"631b33fb0593285590bda7c31dcb37522a551348b7efe26c7f93afa888a34b82"} Feb 16 02:38:53.702642 master-0 kubenswrapper[31559]: I0216 02:38:53.698720 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5bsxg" Feb 16 02:38:53.702642 master-0 kubenswrapper[31559]: I0216 02:38:53.698735 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="631b33fb0593285590bda7c31dcb37522a551348b7efe26c7f93afa888a34b82" Feb 16 02:38:53.702642 master-0 kubenswrapper[31559]: I0216 02:38:53.700548 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kcsvv" event={"ID":"2ce4194c-65d5-4538-bb48-d1f17d880599","Type":"ContainerDied","Data":"3ea05d9b083f2146147339c772e0f40998e0df5716472748d54c48f17a173aff"} Feb 16 02:38:53.702642 master-0 kubenswrapper[31559]: I0216 02:38:53.700569 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kcsvv" Feb 16 02:38:53.702642 master-0 kubenswrapper[31559]: I0216 02:38:53.700579 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ea05d9b083f2146147339c772e0f40998e0df5716472748d54c48f17a173aff" Feb 16 02:38:53.703787 master-0 kubenswrapper[31559]: I0216 02:38:53.703303 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rwhn" event={"ID":"eb6fab0b-42dc-4235-bc08-73b63df4ed3a","Type":"ContainerDied","Data":"740cd17f732097dfd46c12f965b2e3271c0589a1e85731367144cb1d67cf5a72"} Feb 16 02:38:53.703787 master-0 kubenswrapper[31559]: I0216 02:38:53.703345 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="740cd17f732097dfd46c12f965b2e3271c0589a1e85731367144cb1d67cf5a72" Feb 16 02:38:53.703787 master-0 kubenswrapper[31559]: I0216 02:38:53.703395 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7rwhn" Feb 16 02:38:53.711769 master-0 kubenswrapper[31559]: I0216 02:38:53.709215 31559 generic.go:334] "Generic (PLEG): container finished" podID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerID="f654109ada4b3445a4fbbdd048831e4736211bc00c4c70d2aecbe8baefe2e609" exitCode=0 Feb 16 02:38:53.711769 master-0 kubenswrapper[31559]: I0216 02:38:53.709350 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" event={"ID":"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f","Type":"ContainerDied","Data":"f654109ada4b3445a4fbbdd048831e4736211bc00c4c70d2aecbe8baefe2e609"} Feb 16 02:38:53.785010 master-0 kubenswrapper[31559]: I0216 02:38:53.784880 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce4194c-65d5-4538-bb48-d1f17d880599-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:54.157510 master-0 kubenswrapper[31559]: I0216 02:38:54.153702 
31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cdd8bf54c-48llw"] Feb 16 02:38:54.157510 master-0 kubenswrapper[31559]: E0216 02:38:54.154197 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6fab0b-42dc-4235-bc08-73b63df4ed3a" containerName="mariadb-database-create" Feb 16 02:38:54.157510 master-0 kubenswrapper[31559]: I0216 02:38:54.154210 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6fab0b-42dc-4235-bc08-73b63df4ed3a" containerName="mariadb-database-create" Feb 16 02:38:54.157510 master-0 kubenswrapper[31559]: E0216 02:38:54.154260 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0b20e3-b4e6-42eb-a920-588caa2195df" containerName="mariadb-database-create" Feb 16 02:38:54.157510 master-0 kubenswrapper[31559]: I0216 02:38:54.154270 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0b20e3-b4e6-42eb-a920-588caa2195df" containerName="mariadb-database-create" Feb 16 02:38:54.157510 master-0 kubenswrapper[31559]: E0216 02:38:54.154287 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce4194c-65d5-4538-bb48-d1f17d880599" containerName="glance-db-sync" Feb 16 02:38:54.157510 master-0 kubenswrapper[31559]: I0216 02:38:54.154295 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce4194c-65d5-4538-bb48-d1f17d880599" containerName="glance-db-sync" Feb 16 02:38:54.167466 master-0 kubenswrapper[31559]: I0216 02:38:54.162736 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6fab0b-42dc-4235-bc08-73b63df4ed3a" containerName="mariadb-database-create" Feb 16 02:38:54.167466 master-0 kubenswrapper[31559]: I0216 02:38:54.162818 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0b20e3-b4e6-42eb-a920-588caa2195df" containerName="mariadb-database-create" Feb 16 02:38:54.167466 master-0 kubenswrapper[31559]: I0216 02:38:54.162869 31559 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2ce4194c-65d5-4538-bb48-d1f17d880599" containerName="glance-db-sync" Feb 16 02:38:54.167466 master-0 kubenswrapper[31559]: I0216 02:38:54.164056 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.203545 master-0 kubenswrapper[31559]: I0216 02:38:54.191130 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cdd8bf54c-48llw"] Feb 16 02:38:54.203545 master-0 kubenswrapper[31559]: I0216 02:38:54.196593 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-swift-storage-0\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.203545 master-0 kubenswrapper[31559]: I0216 02:38:54.196646 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-svc\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.203545 master-0 kubenswrapper[31559]: I0216 02:38:54.196680 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.203545 master-0 kubenswrapper[31559]: I0216 02:38:54.196724 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-sb\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.203545 master-0 kubenswrapper[31559]: I0216 02:38:54.196811 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjp6\" (UniqueName: \"kubernetes.io/projected/1d38194f-c1be-4b20-acdd-793a9bef8b1b-kube-api-access-zkjp6\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.203545 master-0 kubenswrapper[31559]: I0216 02:38:54.196838 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-config\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.301692 master-0 kubenswrapper[31559]: I0216 02:38:54.301651 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-sb\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.301964 master-0 kubenswrapper[31559]: I0216 02:38:54.301944 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkjp6\" (UniqueName: \"kubernetes.io/projected/1d38194f-c1be-4b20-acdd-793a9bef8b1b-kube-api-access-zkjp6\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.302031 master-0 kubenswrapper[31559]: I0216 02:38:54.301995 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-config\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.302165 master-0 kubenswrapper[31559]: I0216 02:38:54.302145 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-swift-storage-0\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.302218 master-0 kubenswrapper[31559]: I0216 02:38:54.302196 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-svc\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.302273 master-0 kubenswrapper[31559]: I0216 02:38:54.302244 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.302922 master-0 kubenswrapper[31559]: I0216 02:38:54.302886 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-sb\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.303572 master-0 kubenswrapper[31559]: I0216 02:38:54.303537 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-config\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.303705 master-0 kubenswrapper[31559]: I0216 02:38:54.303656 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-swift-storage-0\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.304703 master-0 kubenswrapper[31559]: I0216 02:38:54.304675 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-svc\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.304975 master-0 kubenswrapper[31559]: I0216 02:38:54.304943 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.321375 master-0 kubenswrapper[31559]: I0216 02:38:54.321271 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkjp6\" (UniqueName: \"kubernetes.io/projected/1d38194f-c1be-4b20-acdd-793a9bef8b1b-kube-api-access-zkjp6\") pod \"dnsmasq-dns-6cdd8bf54c-48llw\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:54.534143 master-0 kubenswrapper[31559]: I0216 02:38:54.534082 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:56.743886 master-0 kubenswrapper[31559]: I0216 02:38:56.743799 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7568-account-create-update-nxkd6" event={"ID":"3cd0bd49-96d7-417e-9900-3efb9e8a2de0","Type":"ContainerDied","Data":"0470ab9669fa0e69248f1bd40bf078fcf334aaeff3b8666f1290dc751f53a667"} Feb 16 02:38:56.744683 master-0 kubenswrapper[31559]: I0216 02:38:56.743915 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0470ab9669fa0e69248f1bd40bf078fcf334aaeff3b8666f1290dc751f53a667" Feb 16 02:38:56.748007 master-0 kubenswrapper[31559]: I0216 02:38:56.747953 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" event={"ID":"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f","Type":"ContainerDied","Data":"10414891f25ec0f52b210584d48cccd9b412e50d5558a31d72eb7d87fafc8238"} Feb 16 02:38:56.748083 master-0 kubenswrapper[31559]: I0216 02:38:56.748013 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10414891f25ec0f52b210584d48cccd9b412e50d5558a31d72eb7d87fafc8238" Feb 16 02:38:56.750705 master-0 kubenswrapper[31559]: I0216 02:38:56.750655 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6654-account-create-update-5dvj4" event={"ID":"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2","Type":"ContainerDied","Data":"cf4ce4c5e7fde219ed418a08f68bf110be75730a6adae35f08eae7432a5f32d7"} Feb 16 02:38:56.750705 master-0 kubenswrapper[31559]: I0216 02:38:56.750703 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4ce4c5e7fde219ed418a08f68bf110be75730a6adae35f08eae7432a5f32d7" Feb 16 02:38:57.007136 master-0 kubenswrapper[31559]: I0216 02:38:57.007094 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7568-account-create-update-nxkd6" Feb 16 02:38:57.013407 master-0 kubenswrapper[31559]: I0216 02:38:57.013370 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" Feb 16 02:38:57.061989 master-0 kubenswrapper[31559]: I0216 02:38:57.061663 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6654-account-create-update-5dvj4" Feb 16 02:38:57.095416 master-0 kubenswrapper[31559]: I0216 02:38:57.092483 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-operator-scripts\") pod \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\" (UID: \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\") " Feb 16 02:38:57.095416 master-0 kubenswrapper[31559]: I0216 02:38:57.092557 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-config\") pod \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " Feb 16 02:38:57.095416 master-0 kubenswrapper[31559]: I0216 02:38:57.092608 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7f44\" (UniqueName: \"kubernetes.io/projected/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-kube-api-access-w7f44\") pod \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " Feb 16 02:38:57.095416 master-0 kubenswrapper[31559]: I0216 02:38:57.092686 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-sb\") pod \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " Feb 16 02:38:57.095416 
master-0 kubenswrapper[31559]: I0216 02:38:57.092733 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-nb\") pod \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " Feb 16 02:38:57.095416 master-0 kubenswrapper[31559]: I0216 02:38:57.092792 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-dns-svc\") pod \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\" (UID: \"8dba87a1-1996-4dc0-bc24-cdf9a2fa756f\") " Feb 16 02:38:57.095416 master-0 kubenswrapper[31559]: I0216 02:38:57.092843 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lphn\" (UniqueName: \"kubernetes.io/projected/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-kube-api-access-6lphn\") pod \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\" (UID: \"3cd0bd49-96d7-417e-9900-3efb9e8a2de0\") " Feb 16 02:38:57.099500 master-0 kubenswrapper[31559]: I0216 02:38:57.096453 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3cd0bd49-96d7-417e-9900-3efb9e8a2de0" (UID: "3cd0bd49-96d7-417e-9900-3efb9e8a2de0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:57.101188 master-0 kubenswrapper[31559]: I0216 02:38:57.101152 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-kube-api-access-w7f44" (OuterVolumeSpecName: "kube-api-access-w7f44") pod "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" (UID: "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f"). InnerVolumeSpecName "kube-api-access-w7f44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:57.101564 master-0 kubenswrapper[31559]: I0216 02:38:57.101545 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-kube-api-access-6lphn" (OuterVolumeSpecName: "kube-api-access-6lphn") pod "3cd0bd49-96d7-417e-9900-3efb9e8a2de0" (UID: "3cd0bd49-96d7-417e-9900-3efb9e8a2de0"). InnerVolumeSpecName "kube-api-access-6lphn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:57.158453 master-0 kubenswrapper[31559]: I0216 02:38:57.153861 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" (UID: "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:57.158453 master-0 kubenswrapper[31559]: I0216 02:38:57.155697 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" (UID: "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:57.165539 master-0 kubenswrapper[31559]: I0216 02:38:57.160897 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" (UID: "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:57.177466 master-0 kubenswrapper[31559]: I0216 02:38:57.172180 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-config" (OuterVolumeSpecName: "config") pod "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" (UID: "8dba87a1-1996-4dc0-bc24-cdf9a2fa756f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:57.195553 master-0 kubenswrapper[31559]: I0216 02:38:57.194321 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l2h7\" (UniqueName: \"kubernetes.io/projected/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-kube-api-access-5l2h7\") pod \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\" (UID: \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\") " Feb 16 02:38:57.195765 master-0 kubenswrapper[31559]: I0216 02:38:57.195681 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-operator-scripts\") pod \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\" (UID: \"143f4b4e-6ddb-47c3-bd5f-c370bc7905e2\") " Feb 16 02:38:57.196287 master-0 kubenswrapper[31559]: I0216 02:38:57.196261 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.196287 master-0 kubenswrapper[31559]: I0216 02:38:57.196283 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.196358 master-0 kubenswrapper[31559]: I0216 02:38:57.196313 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lphn\" (UniqueName: 
\"kubernetes.io/projected/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-kube-api-access-6lphn\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.196358 master-0 kubenswrapper[31559]: I0216 02:38:57.196326 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd0bd49-96d7-417e-9900-3efb9e8a2de0-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.196358 master-0 kubenswrapper[31559]: I0216 02:38:57.196335 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.196358 master-0 kubenswrapper[31559]: I0216 02:38:57.196343 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7f44\" (UniqueName: \"kubernetes.io/projected/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-kube-api-access-w7f44\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.196358 master-0 kubenswrapper[31559]: I0216 02:38:57.196352 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.197246 master-0 kubenswrapper[31559]: I0216 02:38:57.197215 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "143f4b4e-6ddb-47c3-bd5f-c370bc7905e2" (UID: "143f4b4e-6ddb-47c3-bd5f-c370bc7905e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:38:57.200863 master-0 kubenswrapper[31559]: I0216 02:38:57.200831 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-kube-api-access-5l2h7" (OuterVolumeSpecName: "kube-api-access-5l2h7") pod "143f4b4e-6ddb-47c3-bd5f-c370bc7905e2" (UID: "143f4b4e-6ddb-47c3-bd5f-c370bc7905e2"). InnerVolumeSpecName "kube-api-access-5l2h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:38:57.299264 master-0 kubenswrapper[31559]: I0216 02:38:57.299104 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l2h7\" (UniqueName: \"kubernetes.io/projected/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-kube-api-access-5l2h7\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.299264 master-0 kubenswrapper[31559]: I0216 02:38:57.299174 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:38:57.364404 master-0 kubenswrapper[31559]: I0216 02:38:57.364324 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cdd8bf54c-48llw"] Feb 16 02:38:57.760322 master-0 kubenswrapper[31559]: I0216 02:38:57.760262 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhrbh" event={"ID":"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf","Type":"ContainerStarted","Data":"f1a6ade63837afd647742266793c7993ba37f1974c65d4a8fa06ebff687d90b5"} Feb 16 02:38:57.761662 master-0 kubenswrapper[31559]: I0216 02:38:57.761580 31559 generic.go:334] "Generic (PLEG): container finished" podID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerID="5f79381f3de30e0194afa5dc8505e7d1cb022bb7b2e311ca4ebfcb3e217a1cb4" exitCode=0 Feb 16 02:38:57.761662 master-0 kubenswrapper[31559]: I0216 02:38:57.761644 31559 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" Feb 16 02:38:57.761821 master-0 kubenswrapper[31559]: I0216 02:38:57.761724 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" event={"ID":"1d38194f-c1be-4b20-acdd-793a9bef8b1b","Type":"ContainerDied","Data":"5f79381f3de30e0194afa5dc8505e7d1cb022bb7b2e311ca4ebfcb3e217a1cb4"} Feb 16 02:38:57.761821 master-0 kubenswrapper[31559]: I0216 02:38:57.761798 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" event={"ID":"1d38194f-c1be-4b20-acdd-793a9bef8b1b","Type":"ContainerStarted","Data":"5f9147cb857e7f9947ab647277a0f6215a87439fe95dc2c08c890f159a659360"} Feb 16 02:38:57.761925 master-0 kubenswrapper[31559]: I0216 02:38:57.761817 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7568-account-create-update-nxkd6" Feb 16 02:38:57.761925 master-0 kubenswrapper[31559]: I0216 02:38:57.761862 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6654-account-create-update-5dvj4" Feb 16 02:38:57.790998 master-0 kubenswrapper[31559]: I0216 02:38:57.790927 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qhrbh" podStartSLOduration=2.082089336 podStartE2EDuration="7.790910477s" podCreationTimestamp="2026-02-16 02:38:50 +0000 UTC" firstStartedPulling="2026-02-16 02:38:51.112057508 +0000 UTC m=+983.456663513" lastFinishedPulling="2026-02-16 02:38:56.820878629 +0000 UTC m=+989.165484654" observedRunningTime="2026-02-16 02:38:57.783928808 +0000 UTC m=+990.128534823" watchObservedRunningTime="2026-02-16 02:38:57.790910477 +0000 UTC m=+990.135516492" Feb 16 02:38:58.283101 master-0 kubenswrapper[31559]: I0216 02:38:58.283031 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-plzpw"] Feb 16 02:38:58.293407 master-0 kubenswrapper[31559]: I0216 02:38:58.293288 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-plzpw"] Feb 16 02:38:58.781639 master-0 kubenswrapper[31559]: I0216 02:38:58.781540 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" event={"ID":"1d38194f-c1be-4b20-acdd-793a9bef8b1b","Type":"ContainerStarted","Data":"ca0de1cbccc5e3d2a99d5a914ee181b235fc0306e6c10036ba9a1ff6e1fb316d"} Feb 16 02:38:58.829271 master-0 kubenswrapper[31559]: I0216 02:38:58.829150 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" podStartSLOduration=4.82912217 podStartE2EDuration="4.82912217s" podCreationTimestamp="2026-02-16 02:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:38:58.813417308 +0000 UTC m=+991.158023353" watchObservedRunningTime="2026-02-16 02:38:58.82912217 +0000 UTC m=+991.173728225" Feb 16 02:38:59.535862 
master-0 kubenswrapper[31559]: I0216 02:38:59.535776 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:38:59.971946 master-0 kubenswrapper[31559]: I0216 02:38:59.971857 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" path="/var/lib/kubelet/pods/8dba87a1-1996-4dc0-bc24-cdf9a2fa756f/volumes" Feb 16 02:39:00.254195 master-0 kubenswrapper[31559]: I0216 02:39:00.254021 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6fd49994df-plzpw" podUID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.187:5353: i/o timeout" Feb 16 02:39:01.827050 master-0 kubenswrapper[31559]: I0216 02:39:01.826819 31559 generic.go:334] "Generic (PLEG): container finished" podID="d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf" containerID="f1a6ade63837afd647742266793c7993ba37f1974c65d4a8fa06ebff687d90b5" exitCode=0 Feb 16 02:39:01.827050 master-0 kubenswrapper[31559]: I0216 02:39:01.826911 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhrbh" event={"ID":"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf","Type":"ContainerDied","Data":"f1a6ade63837afd647742266793c7993ba37f1974c65d4a8fa06ebff687d90b5"} Feb 16 02:39:03.476773 master-0 kubenswrapper[31559]: I0216 02:39:03.476704 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:39:03.560173 master-0 kubenswrapper[31559]: I0216 02:39:03.560121 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-combined-ca-bundle\") pod \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " Feb 16 02:39:03.560519 master-0 kubenswrapper[31559]: I0216 02:39:03.560497 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmlts\" (UniqueName: \"kubernetes.io/projected/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-kube-api-access-qmlts\") pod \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " Feb 16 02:39:03.560710 master-0 kubenswrapper[31559]: I0216 02:39:03.560692 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-config-data\") pod \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\" (UID: \"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf\") " Feb 16 02:39:03.568364 master-0 kubenswrapper[31559]: I0216 02:39:03.568304 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-kube-api-access-qmlts" (OuterVolumeSpecName: "kube-api-access-qmlts") pod "d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf" (UID: "d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf"). InnerVolumeSpecName "kube-api-access-qmlts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:03.609860 master-0 kubenswrapper[31559]: I0216 02:39:03.609793 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf" (UID: "d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:03.620519 master-0 kubenswrapper[31559]: I0216 02:39:03.620474 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-config-data" (OuterVolumeSpecName: "config-data") pod "d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf" (UID: "d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:03.664204 master-0 kubenswrapper[31559]: I0216 02:39:03.664148 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmlts\" (UniqueName: \"kubernetes.io/projected/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-kube-api-access-qmlts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:03.664547 master-0 kubenswrapper[31559]: I0216 02:39:03.664521 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:03.664906 master-0 kubenswrapper[31559]: I0216 02:39:03.664681 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:03.859947 master-0 kubenswrapper[31559]: I0216 02:39:03.859875 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-qhrbh" event={"ID":"d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf","Type":"ContainerDied","Data":"7683da1f34cba8216094a614dde6391079c1964aa7643a92b61fd8caad013e7d"} Feb 16 02:39:03.859947 master-0 kubenswrapper[31559]: I0216 02:39:03.859941 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7683da1f34cba8216094a614dde6391079c1964aa7643a92b61fd8caad013e7d" Feb 16 02:39:03.860390 master-0 kubenswrapper[31559]: I0216 02:39:03.860348 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qhrbh" Feb 16 02:39:04.537293 master-0 kubenswrapper[31559]: I0216 02:39:04.537198 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:39:04.920107 master-0 kubenswrapper[31559]: I0216 02:39:04.920023 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6glzm"] Feb 16 02:39:04.920598 master-0 kubenswrapper[31559]: E0216 02:39:04.920568 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd0bd49-96d7-417e-9900-3efb9e8a2de0" containerName="mariadb-account-create-update" Feb 16 02:39:04.920598 master-0 kubenswrapper[31559]: I0216 02:39:04.920593 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd0bd49-96d7-417e-9900-3efb9e8a2de0" containerName="mariadb-account-create-update" Feb 16 02:39:04.920680 master-0 kubenswrapper[31559]: E0216 02:39:04.920615 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf" containerName="keystone-db-sync" Feb 16 02:39:04.920680 master-0 kubenswrapper[31559]: I0216 02:39:04.920624 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf" containerName="keystone-db-sync" Feb 16 02:39:04.920680 master-0 kubenswrapper[31559]: E0216 02:39:04.920649 31559 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="143f4b4e-6ddb-47c3-bd5f-c370bc7905e2" containerName="mariadb-account-create-update" Feb 16 02:39:04.920680 master-0 kubenswrapper[31559]: I0216 02:39:04.920658 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="143f4b4e-6ddb-47c3-bd5f-c370bc7905e2" containerName="mariadb-account-create-update" Feb 16 02:39:04.920816 master-0 kubenswrapper[31559]: E0216 02:39:04.920682 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerName="init" Feb 16 02:39:04.920816 master-0 kubenswrapper[31559]: I0216 02:39:04.920691 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerName="init" Feb 16 02:39:04.920816 master-0 kubenswrapper[31559]: E0216 02:39:04.920707 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerName="dnsmasq-dns" Feb 16 02:39:04.921412 master-0 kubenswrapper[31559]: I0216 02:39:04.920715 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerName="dnsmasq-dns" Feb 16 02:39:04.921732 master-0 kubenswrapper[31559]: I0216 02:39:04.921704 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dba87a1-1996-4dc0-bc24-cdf9a2fa756f" containerName="dnsmasq-dns" Feb 16 02:39:04.921776 master-0 kubenswrapper[31559]: I0216 02:39:04.921740 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="143f4b4e-6ddb-47c3-bd5f-c370bc7905e2" containerName="mariadb-account-create-update" Feb 16 02:39:04.921776 master-0 kubenswrapper[31559]: I0216 02:39:04.921765 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cd0bd49-96d7-417e-9900-3efb9e8a2de0" containerName="mariadb-account-create-update" Feb 16 02:39:04.921835 master-0 kubenswrapper[31559]: I0216 02:39:04.921808 31559 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf" containerName="keystone-db-sync" Feb 16 02:39:04.922614 master-0 kubenswrapper[31559]: I0216 02:39:04.922580 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:04.925183 master-0 kubenswrapper[31559]: I0216 02:39:04.925146 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 02:39:04.926304 master-0 kubenswrapper[31559]: I0216 02:39:04.926276 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 02:39:04.926526 master-0 kubenswrapper[31559]: I0216 02:39:04.926488 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 02:39:04.926974 master-0 kubenswrapper[31559]: I0216 02:39:04.926918 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 02:39:05.014873 master-0 kubenswrapper[31559]: I0216 02:39:05.014811 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-fernet-keys\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.015086 master-0 kubenswrapper[31559]: I0216 02:39:05.014904 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57cd\" (UniqueName: \"kubernetes.io/projected/e376c3b4-d1f6-4f67-bd17-406917b6c866-kube-api-access-z57cd\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.015086 master-0 kubenswrapper[31559]: I0216 02:39:05.015012 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-scripts\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.015086 master-0 kubenswrapper[31559]: I0216 02:39:05.015046 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-config-data\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.015178 master-0 kubenswrapper[31559]: I0216 02:39:05.015089 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-combined-ca-bundle\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.015178 master-0 kubenswrapper[31559]: I0216 02:39:05.015125 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-credential-keys\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.032710 master-0 kubenswrapper[31559]: I0216 02:39:05.032640 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6glzm"] Feb 16 02:39:05.124464 master-0 kubenswrapper[31559]: I0216 02:39:05.117324 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-fernet-keys\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 
02:39:05.124464 master-0 kubenswrapper[31559]: I0216 02:39:05.117680 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z57cd\" (UniqueName: \"kubernetes.io/projected/e376c3b4-d1f6-4f67-bd17-406917b6c866-kube-api-access-z57cd\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.124464 master-0 kubenswrapper[31559]: I0216 02:39:05.120542 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-scripts\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.124464 master-0 kubenswrapper[31559]: I0216 02:39:05.120590 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-config-data\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.124464 master-0 kubenswrapper[31559]: I0216 02:39:05.120630 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-combined-ca-bundle\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.124464 master-0 kubenswrapper[31559]: I0216 02:39:05.120685 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-credential-keys\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.124464 master-0 
kubenswrapper[31559]: I0216 02:39:05.124467 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-credential-keys\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.124894 master-0 kubenswrapper[31559]: I0216 02:39:05.124480 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-fernet-keys\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.124894 master-0 kubenswrapper[31559]: I0216 02:39:05.124526 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c6f66bf95-nbpjz"] Feb 16 02:39:05.124894 master-0 kubenswrapper[31559]: I0216 02:39:05.124724 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" podUID="1003e078-0808-429d-99a8-18f4497431cd" containerName="dnsmasq-dns" containerID="cri-o://788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001" gracePeriod=10 Feb 16 02:39:05.129467 master-0 kubenswrapper[31559]: I0216 02:39:05.126196 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-config-data\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.129467 master-0 kubenswrapper[31559]: I0216 02:39:05.128020 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-scripts\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " 
pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.129467 master-0 kubenswrapper[31559]: I0216 02:39:05.128206 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-combined-ca-bundle\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.149464 master-0 kubenswrapper[31559]: I0216 02:39:05.148984 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z57cd\" (UniqueName: \"kubernetes.io/projected/e376c3b4-d1f6-4f67-bd17-406917b6c866-kube-api-access-z57cd\") pod \"keystone-bootstrap-6glzm\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.178457 master-0 kubenswrapper[31559]: I0216 02:39:05.172100 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d95bcd475-rv868"] Feb 16 02:39:05.194459 master-0 kubenswrapper[31559]: I0216 02:39:05.190294 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d95bcd475-rv868"] Feb 16 02:39:05.194459 master-0 kubenswrapper[31559]: I0216 02:39:05.190576 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.213297 master-0 kubenswrapper[31559]: I0216 02:39:05.208018 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-nqj26"] Feb 16 02:39:05.213297 master-0 kubenswrapper[31559]: I0216 02:39:05.212942 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:05.220467 master-0 kubenswrapper[31559]: I0216 02:39:05.219941 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dde57-db-sync-2z65z"] Feb 16 02:39:05.226782 master-0 kubenswrapper[31559]: I0216 02:39:05.224384 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.226782 master-0 kubenswrapper[31559]: I0216 02:39:05.226466 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-config-data" Feb 16 02:39:05.230482 master-0 kubenswrapper[31559]: I0216 02:39:05.226722 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-scripts" Feb 16 02:39:05.247469 master-0 kubenswrapper[31559]: I0216 02:39:05.244597 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:05.251467 master-0 kubenswrapper[31559]: I0216 02:39:05.250671 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mxb5k"] Feb 16 02:39:05.256491 master-0 kubenswrapper[31559]: I0216 02:39:05.252983 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.261482 master-0 kubenswrapper[31559]: I0216 02:39:05.257500 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 02:39:05.261482 master-0 kubenswrapper[31559]: I0216 02:39:05.257801 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 02:39:05.290463 master-0 kubenswrapper[31559]: I0216 02:39:05.287055 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-nqj26"] Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.327877 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.327924 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-config-data\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.327950 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-combined-ca-bundle\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.327991 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-config\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328009 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-db-sync-config-data\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328040 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbf5e91-b1f0-466e-80ca-d7b79ace4552-etc-machine-id\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328058 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-combined-ca-bundle\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328091 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lkt4\" (UniqueName: \"kubernetes.io/projected/69e0b955-6516-4cb2-82e0-a6d1a88508c8-kube-api-access-8lkt4\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.345474 
master-0 kubenswrapper[31559]: I0216 02:39:05.328115 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d15bc7-f350-4b27-a27c-df6b81c9b50b-operator-scripts\") pod \"ironic-db-create-nqj26\" (UID: \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\") " pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328143 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rk4k\" (UniqueName: \"kubernetes.io/projected/abbf5e91-b1f0-466e-80ca-d7b79ace4552-kube-api-access-8rk4k\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328181 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-scripts\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328200 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328225 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-svc\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: 
\"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328239 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328258 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvnn\" (UniqueName: \"kubernetes.io/projected/032a3554-910a-472a-8537-73e08670ffe8-kube-api-access-7vvnn\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328273 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-config\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.328513 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72v72\" (UniqueName: \"kubernetes.io/projected/21d15bc7-f350-4b27-a27c-df6b81c9b50b-kube-api-access-72v72\") pod \"ironic-db-create-nqj26\" (UID: \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\") " pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:05.345474 master-0 kubenswrapper[31559]: I0216 02:39:05.340755 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-db-sync-2z65z"] Feb 16 02:39:05.375862 master-0 kubenswrapper[31559]: I0216 
02:39:05.375661 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mxb5k"] Feb 16 02:39:05.429106 master-0 kubenswrapper[31559]: I0216 02:39:05.429050 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-5df0-account-create-update-xnvlf"] Feb 16 02:39:05.429828 master-0 kubenswrapper[31559]: I0216 02:39:05.429786 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.429828 master-0 kubenswrapper[31559]: I0216 02:39:05.429836 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-config-data\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.429828 master-0 kubenswrapper[31559]: I0216 02:39:05.429864 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-combined-ca-bundle\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.429997 master-0 kubenswrapper[31559]: I0216 02:39:05.429896 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-config\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.429997 master-0 kubenswrapper[31559]: I0216 02:39:05.429916 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-db-sync-config-data\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.429997 master-0 kubenswrapper[31559]: I0216 02:39:05.429948 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbf5e91-b1f0-466e-80ca-d7b79ace4552-etc-machine-id\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.429997 master-0 kubenswrapper[31559]: I0216 02:39:05.429968 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-combined-ca-bundle\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430007 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lkt4\" (UniqueName: \"kubernetes.io/projected/69e0b955-6516-4cb2-82e0-a6d1a88508c8-kube-api-access-8lkt4\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430032 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d15bc7-f350-4b27-a27c-df6b81c9b50b-operator-scripts\") pod \"ironic-db-create-nqj26\" (UID: \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\") " pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430060 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rk4k\" (UniqueName: \"kubernetes.io/projected/abbf5e91-b1f0-466e-80ca-d7b79ace4552-kube-api-access-8rk4k\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430097 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-scripts\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430119 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430136 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-svc\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430151 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430166 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvnn\" (UniqueName: \"kubernetes.io/projected/032a3554-910a-472a-8537-73e08670ffe8-kube-api-access-7vvnn\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430181 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-config\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430202 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72v72\" (UniqueName: \"kubernetes.io/projected/21d15bc7-f350-4b27-a27c-df6b81c9b50b-kube-api-access-72v72\") pod \"ironic-db-create-nqj26\" (UID: \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\") " pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:05.430586 master-0 kubenswrapper[31559]: I0216 02:39:05.430422 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:05.435184 master-0 kubenswrapper[31559]: I0216 02:39:05.431162 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.435184 master-0 kubenswrapper[31559]: I0216 02:39:05.431890 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d15bc7-f350-4b27-a27c-df6b81c9b50b-operator-scripts\") pod \"ironic-db-create-nqj26\" (UID: \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\") " pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:05.435184 master-0 kubenswrapper[31559]: I0216 02:39:05.432942 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-svc\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.435184 master-0 kubenswrapper[31559]: I0216 02:39:05.434717 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Feb 16 02:39:05.436281 master-0 kubenswrapper[31559]: I0216 02:39:05.435750 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.436768 master-0 kubenswrapper[31559]: I0216 02:39:05.436522 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.440559 master-0 kubenswrapper[31559]: I0216 02:39:05.440059 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-config\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.440559 master-0 kubenswrapper[31559]: I0216 02:39:05.440133 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbf5e91-b1f0-466e-80ca-d7b79ace4552-etc-machine-id\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.443718 master-0 kubenswrapper[31559]: I0216 02:39:05.442785 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5df0-account-create-update-xnvlf"] Feb 16 02:39:05.448584 master-0 kubenswrapper[31559]: I0216 02:39:05.448543 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-db-sync-config-data\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.448863 master-0 kubenswrapper[31559]: I0216 02:39:05.448827 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-config\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.449230 master-0 kubenswrapper[31559]: 
I0216 02:39:05.449197 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-combined-ca-bundle\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.462598 master-0 kubenswrapper[31559]: I0216 02:39:05.462555 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvnn\" (UniqueName: \"kubernetes.io/projected/032a3554-910a-472a-8537-73e08670ffe8-kube-api-access-7vvnn\") pod \"neutron-db-sync-mxb5k\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") " pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.463764 master-0 kubenswrapper[31559]: I0216 02:39:05.463729 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rk4k\" (UniqueName: \"kubernetes.io/projected/abbf5e91-b1f0-466e-80ca-d7b79ace4552-kube-api-access-8rk4k\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.464273 master-0 kubenswrapper[31559]: I0216 02:39:05.464220 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72v72\" (UniqueName: \"kubernetes.io/projected/21d15bc7-f350-4b27-a27c-df6b81c9b50b-kube-api-access-72v72\") pod \"ironic-db-create-nqj26\" (UID: \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\") " pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:05.467008 master-0 kubenswrapper[31559]: I0216 02:39:05.466973 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lkt4\" (UniqueName: \"kubernetes.io/projected/69e0b955-6516-4cb2-82e0-a6d1a88508c8-kube-api-access-8lkt4\") pod \"dnsmasq-dns-7d95bcd475-rv868\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.488597 master-0 kubenswrapper[31559]: I0216 
02:39:05.485868 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95bcd475-rv868"] Feb 16 02:39:05.488597 master-0 kubenswrapper[31559]: I0216 02:39:05.486502 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:05.493802 master-0 kubenswrapper[31559]: I0216 02:39:05.493771 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-combined-ca-bundle\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.493926 master-0 kubenswrapper[31559]: I0216 02:39:05.493896 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-scripts\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.497583 master-0 kubenswrapper[31559]: I0216 02:39:05.497539 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-config-data\") pod \"cinder-dde57-db-sync-2z65z\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.498511 master-0 kubenswrapper[31559]: I0216 02:39:05.498480 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cc4888b7-xrh7r"] Feb 16 02:39:05.514077 master-0 kubenswrapper[31559]: I0216 02:39:05.514025 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.536026 master-0 kubenswrapper[31559]: I0216 02:39:05.535969 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdwq\" (UniqueName: \"kubernetes.io/projected/5a67f2cf-c024-48a0-8341-15aca05beca0-kube-api-access-xwdwq\") pod \"ironic-5df0-account-create-update-xnvlf\" (UID: \"5a67f2cf-c024-48a0-8341-15aca05beca0\") " pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:05.536204 master-0 kubenswrapper[31559]: I0216 02:39:05.536038 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a67f2cf-c024-48a0-8341-15aca05beca0-operator-scripts\") pod \"ironic-5df0-account-create-update-xnvlf\" (UID: \"5a67f2cf-c024-48a0-8341-15aca05beca0\") " pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:05.547886 master-0 kubenswrapper[31559]: I0216 02:39:05.547845 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jgr7s"] Feb 16 02:39:05.552214 master-0 kubenswrapper[31559]: I0216 02:39:05.550877 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.554950 master-0 kubenswrapper[31559]: I0216 02:39:05.554846 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 02:39:05.555075 master-0 kubenswrapper[31559]: I0216 02:39:05.555043 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 02:39:05.575623 master-0 kubenswrapper[31559]: I0216 02:39:05.575580 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cc4888b7-xrh7r"] Feb 16 02:39:05.589278 master-0 kubenswrapper[31559]: I0216 02:39:05.588498 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jgr7s"] Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638592 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwdwq\" (UniqueName: \"kubernetes.io/projected/5a67f2cf-c024-48a0-8341-15aca05beca0-kube-api-access-xwdwq\") pod \"ironic-5df0-account-create-update-xnvlf\" (UID: \"5a67f2cf-c024-48a0-8341-15aca05beca0\") " pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638644 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a67f2cf-c024-48a0-8341-15aca05beca0-operator-scripts\") pod \"ironic-5df0-account-create-update-xnvlf\" (UID: \"5a67f2cf-c024-48a0-8341-15aca05beca0\") " pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638697 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: 
\"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638775 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5pqd\" (UniqueName: \"kubernetes.io/projected/f313a2ac-7115-41c1-ac49-ede1baebd452-kube-api-access-g5pqd\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638791 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638839 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-svc\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638855 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-config\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638870 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-config-data\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638899 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-combined-ca-bundle\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638919 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.638932 master-0 kubenswrapper[31559]: I0216 02:39:05.638939 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmtt\" (UniqueName: \"kubernetes.io/projected/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-kube-api-access-nvmtt\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.639493 master-0 kubenswrapper[31559]: I0216 02:39:05.639030 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-scripts\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.639493 master-0 kubenswrapper[31559]: I0216 02:39:05.639124 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f313a2ac-7115-41c1-ac49-ede1baebd452-logs\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.640891 master-0 kubenswrapper[31559]: I0216 02:39:05.639852 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a67f2cf-c024-48a0-8341-15aca05beca0-operator-scripts\") pod \"ironic-5df0-account-create-update-xnvlf\" (UID: \"5a67f2cf-c024-48a0-8341-15aca05beca0\") " pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:05.692337 master-0 kubenswrapper[31559]: I0216 02:39:05.692283 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwdwq\" (UniqueName: \"kubernetes.io/projected/5a67f2cf-c024-48a0-8341-15aca05beca0-kube-api-access-xwdwq\") pod \"ironic-5df0-account-create-update-xnvlf\" (UID: \"5a67f2cf-c024-48a0-8341-15aca05beca0\") " pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:05.701451 master-0 kubenswrapper[31559]: I0216 02:39:05.699290 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:05.730122 master-0 kubenswrapper[31559]: I0216 02:39:05.729296 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:05.730122 master-0 kubenswrapper[31559]: I0216 02:39:05.729952 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:05.741366 master-0 kubenswrapper[31559]: I0216 02:39:05.740733 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-svc\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.741366 master-0 kubenswrapper[31559]: I0216 02:39:05.740778 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-config\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.741366 master-0 kubenswrapper[31559]: I0216 02:39:05.740799 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-config-data\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.741366 master-0 kubenswrapper[31559]: I0216 02:39:05.740843 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-combined-ca-bundle\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.741366 master-0 kubenswrapper[31559]: I0216 02:39:05.740866 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " 
pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.741366 master-0 kubenswrapper[31559]: I0216 02:39:05.740887 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmtt\" (UniqueName: \"kubernetes.io/projected/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-kube-api-access-nvmtt\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.741912 master-0 kubenswrapper[31559]: I0216 02:39:05.740980 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-scripts\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.741912 master-0 kubenswrapper[31559]: I0216 02:39:05.741515 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f313a2ac-7115-41c1-ac49-ede1baebd452-logs\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.741912 master-0 kubenswrapper[31559]: I0216 02:39:05.741581 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.741912 master-0 kubenswrapper[31559]: I0216 02:39:05.741671 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5pqd\" (UniqueName: \"kubernetes.io/projected/f313a2ac-7115-41c1-ac49-ede1baebd452-kube-api-access-g5pqd\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " 
pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.741912 master-0 kubenswrapper[31559]: I0216 02:39:05.741706 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.745779 master-0 kubenswrapper[31559]: I0216 02:39:05.743109 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.745779 master-0 kubenswrapper[31559]: I0216 02:39:05.743558 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f313a2ac-7115-41c1-ac49-ede1baebd452-logs\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.745779 master-0 kubenswrapper[31559]: I0216 02:39:05.743878 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.745779 master-0 kubenswrapper[31559]: I0216 02:39:05.744087 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 
16 02:39:05.745779 master-0 kubenswrapper[31559]: I0216 02:39:05.744669 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-svc\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.746087 master-0 kubenswrapper[31559]: I0216 02:39:05.746005 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-config\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.756473 master-0 kubenswrapper[31559]: I0216 02:39:05.749696 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-scripts\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.756473 master-0 kubenswrapper[31559]: I0216 02:39:05.753262 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-combined-ca-bundle\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.761362 master-0 kubenswrapper[31559]: I0216 02:39:05.761214 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-config-data\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.766153 master-0 kubenswrapper[31559]: I0216 02:39:05.766114 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g5pqd\" (UniqueName: \"kubernetes.io/projected/f313a2ac-7115-41c1-ac49-ede1baebd452-kube-api-access-g5pqd\") pod \"placement-db-sync-jgr7s\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.769504 master-0 kubenswrapper[31559]: I0216 02:39:05.769265 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmtt\" (UniqueName: \"kubernetes.io/projected/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-kube-api-access-nvmtt\") pod \"dnsmasq-dns-6cc4888b7-xrh7r\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") " pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.833813 master-0 kubenswrapper[31559]: I0216 02:39:05.833775 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" Feb 16 02:39:05.855481 master-0 kubenswrapper[31559]: I0216 02:39:05.855415 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:05.878165 master-0 kubenswrapper[31559]: I0216 02:39:05.877920 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:05.881838 master-0 kubenswrapper[31559]: I0216 02:39:05.881731 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:05.891960 master-0 kubenswrapper[31559]: I0216 02:39:05.891913 31559 generic.go:334] "Generic (PLEG): container finished" podID="1003e078-0808-429d-99a8-18f4497431cd" containerID="788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001" exitCode=0 Feb 16 02:39:05.892036 master-0 kubenswrapper[31559]: I0216 02:39:05.891966 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" event={"ID":"1003e078-0808-429d-99a8-18f4497431cd","Type":"ContainerDied","Data":"788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001"} Feb 16 02:39:05.892036 master-0 kubenswrapper[31559]: I0216 02:39:05.891997 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" event={"ID":"1003e078-0808-429d-99a8-18f4497431cd","Type":"ContainerDied","Data":"deea3f94164b7f01864045de10337f558a86a0363163677fe3c585c4cbd65546"} Feb 16 02:39:05.892036 master-0 kubenswrapper[31559]: I0216 02:39:05.892019 31559 scope.go:117] "RemoveContainer" containerID="788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001" Feb 16 02:39:05.892208 master-0 kubenswrapper[31559]: I0216 02:39:05.892156 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c6f66bf95-nbpjz" Feb 16 02:39:05.947696 master-0 kubenswrapper[31559]: I0216 02:39:05.947420 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-svc\") pod \"1003e078-0808-429d-99a8-18f4497431cd\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " Feb 16 02:39:05.947696 master-0 kubenswrapper[31559]: I0216 02:39:05.947580 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk479\" (UniqueName: \"kubernetes.io/projected/1003e078-0808-429d-99a8-18f4497431cd-kube-api-access-qk479\") pod \"1003e078-0808-429d-99a8-18f4497431cd\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " Feb 16 02:39:05.947923 master-0 kubenswrapper[31559]: I0216 02:39:05.947728 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-config\") pod \"1003e078-0808-429d-99a8-18f4497431cd\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " Feb 16 02:39:05.947923 master-0 kubenswrapper[31559]: I0216 02:39:05.947803 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-nb\") pod \"1003e078-0808-429d-99a8-18f4497431cd\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " Feb 16 02:39:05.947923 master-0 kubenswrapper[31559]: I0216 02:39:05.947828 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-swift-storage-0\") pod \"1003e078-0808-429d-99a8-18f4497431cd\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " Feb 16 02:39:05.947923 master-0 kubenswrapper[31559]: I0216 02:39:05.947858 31559 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-sb\") pod \"1003e078-0808-429d-99a8-18f4497431cd\" (UID: \"1003e078-0808-429d-99a8-18f4497431cd\") " Feb 16 02:39:05.951706 master-0 kubenswrapper[31559]: I0216 02:39:05.951663 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1003e078-0808-429d-99a8-18f4497431cd-kube-api-access-qk479" (OuterVolumeSpecName: "kube-api-access-qk479") pod "1003e078-0808-429d-99a8-18f4497431cd" (UID: "1003e078-0808-429d-99a8-18f4497431cd"). InnerVolumeSpecName "kube-api-access-qk479". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:06.008922 master-0 kubenswrapper[31559]: I0216 02:39:06.008314 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1003e078-0808-429d-99a8-18f4497431cd" (UID: "1003e078-0808-429d-99a8-18f4497431cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:06.014842 master-0 kubenswrapper[31559]: I0216 02:39:06.014301 31559 scope.go:117] "RemoveContainer" containerID="f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30" Feb 16 02:39:06.016944 master-0 kubenswrapper[31559]: I0216 02:39:06.016321 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6glzm"] Feb 16 02:39:06.018788 master-0 kubenswrapper[31559]: I0216 02:39:06.017947 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1003e078-0808-429d-99a8-18f4497431cd" (UID: "1003e078-0808-429d-99a8-18f4497431cd"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:06.023021 master-0 kubenswrapper[31559]: I0216 02:39:06.022963 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1003e078-0808-429d-99a8-18f4497431cd" (UID: "1003e078-0808-429d-99a8-18f4497431cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:06.042709 master-0 kubenswrapper[31559]: I0216 02:39:06.042657 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-config" (OuterVolumeSpecName: "config") pod "1003e078-0808-429d-99a8-18f4497431cd" (UID: "1003e078-0808-429d-99a8-18f4497431cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:06.053329 master-0 kubenswrapper[31559]: I0216 02:39:06.052718 31559 scope.go:117] "RemoveContainer" containerID="788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001" Feb 16 02:39:06.054608 master-0 kubenswrapper[31559]: E0216 02:39:06.054576 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001\": container with ID starting with 788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001 not found: ID does not exist" containerID="788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001" Feb 16 02:39:06.054681 master-0 kubenswrapper[31559]: I0216 02:39:06.054610 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001"} err="failed to get container status \"788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001\": rpc error: 
code = NotFound desc = could not find container \"788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001\": container with ID starting with 788b27d114c73b91d23e51f68e0144cb061f5737f8399214f79663f228986001 not found: ID does not exist" Feb 16 02:39:06.054681 master-0 kubenswrapper[31559]: I0216 02:39:06.054640 31559 scope.go:117] "RemoveContainer" containerID="f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30" Feb 16 02:39:06.057384 master-0 kubenswrapper[31559]: I0216 02:39:06.057334 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:06.057384 master-0 kubenswrapper[31559]: I0216 02:39:06.057382 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:06.057549 master-0 kubenswrapper[31559]: I0216 02:39:06.057395 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:06.057549 master-0 kubenswrapper[31559]: I0216 02:39:06.057405 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:06.057549 master-0 kubenswrapper[31559]: I0216 02:39:06.057415 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk479\" (UniqueName: \"kubernetes.io/projected/1003e078-0808-429d-99a8-18f4497431cd-kube-api-access-qk479\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:06.058174 master-0 kubenswrapper[31559]: E0216 02:39:06.058147 31559 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30\": container with ID starting with f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30 not found: ID does not exist" containerID="f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30" Feb 16 02:39:06.058226 master-0 kubenswrapper[31559]: I0216 02:39:06.058182 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30"} err="failed to get container status \"f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30\": rpc error: code = NotFound desc = could not find container \"f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30\": container with ID starting with f18ad949046a3a2e41d69e5b1f4d9cb631a1ac2645be9c8bbf7f73916e560f30 not found: ID does not exist" Feb 16 02:39:06.073512 master-0 kubenswrapper[31559]: I0216 02:39:06.073451 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1003e078-0808-429d-99a8-18f4497431cd" (UID: "1003e078-0808-429d-99a8-18f4497431cd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:06.160181 master-0 kubenswrapper[31559]: I0216 02:39:06.159183 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1003e078-0808-429d-99a8-18f4497431cd-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:06.237837 master-0 kubenswrapper[31559]: I0216 02:39:06.237726 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95bcd475-rv868"] Feb 16 02:39:06.256733 master-0 kubenswrapper[31559]: I0216 02:39:06.256325 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c6f66bf95-nbpjz"] Feb 16 02:39:06.268388 master-0 kubenswrapper[31559]: W0216 02:39:06.264348 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69e0b955_6516_4cb2_82e0_a6d1a88508c8.slice/crio-bfe44a6e1e44b893da14139844a6294860619a668fdab4bac4b59574f7579b70 WatchSource:0}: Error finding container bfe44a6e1e44b893da14139844a6294860619a668fdab4bac4b59574f7579b70: Status 404 returned error can't find the container with id bfe44a6e1e44b893da14139844a6294860619a668fdab4bac4b59574f7579b70 Feb 16 02:39:06.287191 master-0 kubenswrapper[31559]: I0216 02:39:06.286895 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c6f66bf95-nbpjz"] Feb 16 02:39:06.390813 master-0 kubenswrapper[31559]: I0216 02:39:06.390226 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-nqj26"] Feb 16 02:39:06.785092 master-0 kubenswrapper[31559]: I0216 02:39:06.784926 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5df0-account-create-update-xnvlf"] Feb 16 02:39:06.804098 master-0 kubenswrapper[31559]: I0216 02:39:06.796048 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mxb5k"] Feb 16 02:39:06.804583 master-0 
kubenswrapper[31559]: W0216 02:39:06.804527 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a67f2cf_c024_48a0_8341_15aca05beca0.slice/crio-33131ff61869e28fbb7c08b979ec302e42b9a0aa63bcea5c88a1741b68300192 WatchSource:0}: Error finding container 33131ff61869e28fbb7c08b979ec302e42b9a0aa63bcea5c88a1741b68300192: Status 404 returned error can't find the container with id 33131ff61869e28fbb7c08b979ec302e42b9a0aa63bcea5c88a1741b68300192 Feb 16 02:39:06.821611 master-0 kubenswrapper[31559]: I0216 02:39:06.821554 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-db-sync-2z65z"] Feb 16 02:39:06.927747 master-0 kubenswrapper[31559]: I0216 02:39:06.927693 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-db-sync-2z65z" event={"ID":"abbf5e91-b1f0-466e-80ca-d7b79ace4552","Type":"ContainerStarted","Data":"8b42be8b811288b40c93e27668cb07bbb709d6bd8ade3a50e3b423a88163b599"} Feb 16 02:39:06.931895 master-0 kubenswrapper[31559]: I0216 02:39:06.931701 31559 generic.go:334] "Generic (PLEG): container finished" podID="69e0b955-6516-4cb2-82e0-a6d1a88508c8" containerID="1a5127c10788f5a0715cbb6faa86b6753c0dc7b03b0f231a21559f1e5dc1ad2a" exitCode=0 Feb 16 02:39:06.931895 master-0 kubenswrapper[31559]: I0216 02:39:06.931747 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95bcd475-rv868" event={"ID":"69e0b955-6516-4cb2-82e0-a6d1a88508c8","Type":"ContainerDied","Data":"1a5127c10788f5a0715cbb6faa86b6753c0dc7b03b0f231a21559f1e5dc1ad2a"} Feb 16 02:39:06.931895 master-0 kubenswrapper[31559]: I0216 02:39:06.931765 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95bcd475-rv868" event={"ID":"69e0b955-6516-4cb2-82e0-a6d1a88508c8","Type":"ContainerStarted","Data":"bfe44a6e1e44b893da14139844a6294860619a668fdab4bac4b59574f7579b70"} Feb 16 02:39:06.937587 master-0 
kubenswrapper[31559]: I0216 02:39:06.936534 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mxb5k" event={"ID":"032a3554-910a-472a-8537-73e08670ffe8","Type":"ContainerStarted","Data":"abc71f14f21319c5483859a958d4b27119ff55a5ba6483f1ed45f9abe1aefee9"} Feb 16 02:39:06.940145 master-0 kubenswrapper[31559]: I0216 02:39:06.940103 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6glzm" event={"ID":"e376c3b4-d1f6-4f67-bd17-406917b6c866","Type":"ContainerStarted","Data":"9957c94575b16139adde1349a9b018de6417007657f1a1083222f6f2e05706b7"} Feb 16 02:39:06.940225 master-0 kubenswrapper[31559]: I0216 02:39:06.940153 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6glzm" event={"ID":"e376c3b4-d1f6-4f67-bd17-406917b6c866","Type":"ContainerStarted","Data":"9bb130ca030373001395d08726bc73795324ee8475781b6f0c426a444565d5b8"} Feb 16 02:39:06.945365 master-0 kubenswrapper[31559]: I0216 02:39:06.945326 31559 generic.go:334] "Generic (PLEG): container finished" podID="21d15bc7-f350-4b27-a27c-df6b81c9b50b" containerID="e26c3bac2edcae1f951e7a3108bb83548298c61ec175861d49a1dbec63e44783" exitCode=0 Feb 16 02:39:06.945425 master-0 kubenswrapper[31559]: I0216 02:39:06.945394 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-nqj26" event={"ID":"21d15bc7-f350-4b27-a27c-df6b81c9b50b","Type":"ContainerDied","Data":"e26c3bac2edcae1f951e7a3108bb83548298c61ec175861d49a1dbec63e44783"} Feb 16 02:39:06.945425 master-0 kubenswrapper[31559]: I0216 02:39:06.945415 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-nqj26" event={"ID":"21d15bc7-f350-4b27-a27c-df6b81c9b50b","Type":"ContainerStarted","Data":"c2c297bc88504ecc2cb472f4f6b06d2c76e90f350960dcbf4e27c54f1f1a7cca"} Feb 16 02:39:06.946869 master-0 kubenswrapper[31559]: I0216 02:39:06.946826 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ironic-5df0-account-create-update-xnvlf" event={"ID":"5a67f2cf-c024-48a0-8341-15aca05beca0","Type":"ContainerStarted","Data":"33131ff61869e28fbb7c08b979ec302e42b9a0aa63bcea5c88a1741b68300192"} Feb 16 02:39:06.956495 master-0 kubenswrapper[31559]: I0216 02:39:06.956425 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cc4888b7-xrh7r"] Feb 16 02:39:06.967114 master-0 kubenswrapper[31559]: W0216 02:39:06.967057 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf313a2ac_7115_41c1_ac49_ede1baebd452.slice/crio-155cef6f9e6021de4e94dc97133dfd4e4abb237f5f102f0178d2da7dbc66f071 WatchSource:0}: Error finding container 155cef6f9e6021de4e94dc97133dfd4e4abb237f5f102f0178d2da7dbc66f071: Status 404 returned error can't find the container with id 155cef6f9e6021de4e94dc97133dfd4e4abb237f5f102f0178d2da7dbc66f071 Feb 16 02:39:06.996948 master-0 kubenswrapper[31559]: I0216 02:39:06.996900 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jgr7s"] Feb 16 02:39:07.038732 master-0 kubenswrapper[31559]: I0216 02:39:07.038596 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6glzm" podStartSLOduration=3.038573409 podStartE2EDuration="3.038573409s" podCreationTimestamp="2026-02-16 02:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:06.995531377 +0000 UTC m=+999.340137392" watchObservedRunningTime="2026-02-16 02:39:07.038573409 +0000 UTC m=+999.383179424" Feb 16 02:39:07.089630 master-0 kubenswrapper[31559]: I0216 02:39:07.089350 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:39:07.089887 master-0 kubenswrapper[31559]: E0216 02:39:07.089759 31559 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1003e078-0808-429d-99a8-18f4497431cd" containerName="dnsmasq-dns" Feb 16 02:39:07.089887 master-0 kubenswrapper[31559]: I0216 02:39:07.089772 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="1003e078-0808-429d-99a8-18f4497431cd" containerName="dnsmasq-dns" Feb 16 02:39:07.089887 master-0 kubenswrapper[31559]: E0216 02:39:07.089817 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1003e078-0808-429d-99a8-18f4497431cd" containerName="init" Feb 16 02:39:07.089887 master-0 kubenswrapper[31559]: I0216 02:39:07.089825 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="1003e078-0808-429d-99a8-18f4497431cd" containerName="init" Feb 16 02:39:07.091185 master-0 kubenswrapper[31559]: I0216 02:39:07.090101 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="1003e078-0808-429d-99a8-18f4497431cd" containerName="dnsmasq-dns" Feb 16 02:39:07.097479 master-0 kubenswrapper[31559]: I0216 02:39:07.097200 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.100023 master-0 kubenswrapper[31559]: I0216 02:39:07.099451 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 02:39:07.100023 master-0 kubenswrapper[31559]: I0216 02:39:07.099618 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 02:39:07.100023 master-0 kubenswrapper[31559]: I0216 02:39:07.099791 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-72940-default-external-config-data" Feb 16 02:39:07.112459 master-0 kubenswrapper[31559]: I0216 02:39:07.111665 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:39:07.249267 master-0 kubenswrapper[31559]: I0216 02:39:07.238627 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.249267 master-0 kubenswrapper[31559]: I0216 02:39:07.238710 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8mz\" (UniqueName: \"kubernetes.io/projected/34f9638c-9640-4b80-9be1-cf1e208c3f2a-kube-api-access-lk8mz\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.249267 master-0 kubenswrapper[31559]: I0216 02:39:07.238733 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-combined-ca-bundle\") pod 
\"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.249267 master-0 kubenswrapper[31559]: I0216 02:39:07.238772 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.249267 master-0 kubenswrapper[31559]: I0216 02:39:07.238810 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.249267 master-0 kubenswrapper[31559]: I0216 02:39:07.238852 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.249267 master-0 kubenswrapper[31559]: I0216 02:39:07.238902 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.249267 master-0 kubenswrapper[31559]: I0216 02:39:07.238931 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-config-data\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.342803 master-0 kubenswrapper[31559]: I0216 02:39:07.342751 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.342963 master-0 kubenswrapper[31559]: I0216 02:39:07.342829 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.342963 master-0 kubenswrapper[31559]: I0216 02:39:07.342880 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.342963 master-0 kubenswrapper[31559]: I0216 02:39:07.342945 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.343057 
master-0 kubenswrapper[31559]: I0216 02:39:07.342976 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-config-data\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.343057 master-0 kubenswrapper[31559]: I0216 02:39:07.343005 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.343057 master-0 kubenswrapper[31559]: I0216 02:39:07.343039 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk8mz\" (UniqueName: \"kubernetes.io/projected/34f9638c-9640-4b80-9be1-cf1e208c3f2a-kube-api-access-lk8mz\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.343141 master-0 kubenswrapper[31559]: I0216 02:39:07.343058 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-combined-ca-bundle\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.344029 master-0 kubenswrapper[31559]: I0216 02:39:07.343988 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " 
pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.347849 master-0 kubenswrapper[31559]: I0216 02:39:07.345885 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.347849 master-0 kubenswrapper[31559]: I0216 02:39:07.346926 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-combined-ca-bundle\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.349254 master-0 kubenswrapper[31559]: I0216 02:39:07.349224 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-config-data\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.350046 master-0 kubenswrapper[31559]: I0216 02:39:07.350022 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 02:39:07.350105 master-0 kubenswrapper[31559]: I0216 02:39:07.350048 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6df8a36abcf3de2186a10fb6ccf098d30950cdf1ed8f716dfec080b59e1d20e7/globalmount\"" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.360925 master-0 kubenswrapper[31559]: I0216 02:39:07.360086 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8mz\" (UniqueName: \"kubernetes.io/projected/34f9638c-9640-4b80-9be1-cf1e208c3f2a-kube-api-access-lk8mz\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.360925 master-0 kubenswrapper[31559]: I0216 02:39:07.360225 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.360925 master-0 kubenswrapper[31559]: I0216 02:39:07.360343 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:07.620120 master-0 kubenswrapper[31559]: I0216 02:39:07.620009 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:07.751188 master-0 kubenswrapper[31559]: I0216 02:39:07.751117 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lkt4\" (UniqueName: \"kubernetes.io/projected/69e0b955-6516-4cb2-82e0-a6d1a88508c8-kube-api-access-8lkt4\") pod \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " Feb 16 02:39:07.751494 master-0 kubenswrapper[31559]: I0216 02:39:07.751476 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-swift-storage-0\") pod \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " Feb 16 02:39:07.751642 master-0 kubenswrapper[31559]: I0216 02:39:07.751625 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-nb\") pod \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " Feb 16 02:39:07.751853 master-0 kubenswrapper[31559]: I0216 02:39:07.751837 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-config\") pod \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " Feb 16 02:39:07.752024 master-0 kubenswrapper[31559]: I0216 02:39:07.752008 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-sb\") pod \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " Feb 16 02:39:07.752217 master-0 kubenswrapper[31559]: I0216 02:39:07.752201 31559 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-svc\") pod \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\" (UID: \"69e0b955-6516-4cb2-82e0-a6d1a88508c8\") " Feb 16 02:39:07.769860 master-0 kubenswrapper[31559]: I0216 02:39:07.769616 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e0b955-6516-4cb2-82e0-a6d1a88508c8-kube-api-access-8lkt4" (OuterVolumeSpecName: "kube-api-access-8lkt4") pod "69e0b955-6516-4cb2-82e0-a6d1a88508c8" (UID: "69e0b955-6516-4cb2-82e0-a6d1a88508c8"). InnerVolumeSpecName "kube-api-access-8lkt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:07.807175 master-0 kubenswrapper[31559]: I0216 02:39:07.807117 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69e0b955-6516-4cb2-82e0-a6d1a88508c8" (UID: "69e0b955-6516-4cb2-82e0-a6d1a88508c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:07.824107 master-0 kubenswrapper[31559]: I0216 02:39:07.824031 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69e0b955-6516-4cb2-82e0-a6d1a88508c8" (UID: "69e0b955-6516-4cb2-82e0-a6d1a88508c8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:07.836122 master-0 kubenswrapper[31559]: I0216 02:39:07.836047 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "69e0b955-6516-4cb2-82e0-a6d1a88508c8" (UID: "69e0b955-6516-4cb2-82e0-a6d1a88508c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:07.851138 master-0 kubenswrapper[31559]: I0216 02:39:07.850007 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "69e0b955-6516-4cb2-82e0-a6d1a88508c8" (UID: "69e0b955-6516-4cb2-82e0-a6d1a88508c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:07.859071 master-0 kubenswrapper[31559]: I0216 02:39:07.859002 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lkt4\" (UniqueName: \"kubernetes.io/projected/69e0b955-6516-4cb2-82e0-a6d1a88508c8-kube-api-access-8lkt4\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:07.859071 master-0 kubenswrapper[31559]: I0216 02:39:07.859061 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:07.859071 master-0 kubenswrapper[31559]: I0216 02:39:07.859077 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:07.859337 master-0 kubenswrapper[31559]: I0216 02:39:07.859089 31559 reconciler_common.go:293] "Volume detached for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:07.859337 master-0 kubenswrapper[31559]: I0216 02:39:07.859106 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:07.881531 master-0 kubenswrapper[31559]: I0216 02:39:07.874273 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-config" (OuterVolumeSpecName: "config") pod "69e0b955-6516-4cb2-82e0-a6d1a88508c8" (UID: "69e0b955-6516-4cb2-82e0-a6d1a88508c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:07.963756 master-0 kubenswrapper[31559]: I0216 02:39:07.960845 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e0b955-6516-4cb2-82e0-a6d1a88508c8-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:07.983462 master-0 kubenswrapper[31559]: I0216 02:39:07.977003 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1003e078-0808-429d-99a8-18f4497431cd" path="/var/lib/kubelet/pods/1003e078-0808-429d-99a8-18f4497431cd/volumes" Feb 16 02:39:07.983462 master-0 kubenswrapper[31559]: I0216 02:39:07.982897 31559 generic.go:334] "Generic (PLEG): container finished" podID="5a67f2cf-c024-48a0-8341-15aca05beca0" containerID="596fc56ada4755a525e583fbbd2a9cfb5f1dc7a8e3c4ee5c1f1403d1c41bc945" exitCode=0 Feb 16 02:39:07.983462 master-0 kubenswrapper[31559]: I0216 02:39:07.982971 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5df0-account-create-update-xnvlf" 
event={"ID":"5a67f2cf-c024-48a0-8341-15aca05beca0","Type":"ContainerDied","Data":"596fc56ada4755a525e583fbbd2a9cfb5f1dc7a8e3c4ee5c1f1403d1c41bc945"} Feb 16 02:39:07.984073 master-0 kubenswrapper[31559]: I0216 02:39:07.984032 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:39:07.985746 master-0 kubenswrapper[31559]: E0216 02:39:07.985047 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-72940-default-external-api-0" podUID="34f9638c-9640-4b80-9be1-cf1e208c3f2a" Feb 16 02:39:07.997510 master-0 kubenswrapper[31559]: I0216 02:39:07.987368 31559 generic.go:334] "Generic (PLEG): container finished" podID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" containerID="d8bc4cec40e08a7dc1ec8533dcca3de93db71fbdd72065c9c572dd7a3758ecba" exitCode=0 Feb 16 02:39:07.997510 master-0 kubenswrapper[31559]: I0216 02:39:07.987482 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" event={"ID":"4b6218a7-c0ec-46b5-a1df-cdccc54d540c","Type":"ContainerDied","Data":"d8bc4cec40e08a7dc1ec8533dcca3de93db71fbdd72065c9c572dd7a3758ecba"} Feb 16 02:39:07.997510 master-0 kubenswrapper[31559]: I0216 02:39:07.987514 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" event={"ID":"4b6218a7-c0ec-46b5-a1df-cdccc54d540c","Type":"ContainerStarted","Data":"98539feb3d65696980b9954c125c831b7a334b6c4b866635c6596a5df8610fb3"} Feb 16 02:39:07.997510 master-0 kubenswrapper[31559]: I0216 02:39:07.996862 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95bcd475-rv868" event={"ID":"69e0b955-6516-4cb2-82e0-a6d1a88508c8","Type":"ContainerDied","Data":"bfe44a6e1e44b893da14139844a6294860619a668fdab4bac4b59574f7579b70"} Feb 16 02:39:07.997510 master-0 kubenswrapper[31559]: I0216 02:39:07.997003 31559 
scope.go:117] "RemoveContainer" containerID="1a5127c10788f5a0715cbb6faa86b6753c0dc7b03b0f231a21559f1e5dc1ad2a" Feb 16 02:39:07.997510 master-0 kubenswrapper[31559]: I0216 02:39:07.997277 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95bcd475-rv868" Feb 16 02:39:08.001399 master-0 kubenswrapper[31559]: I0216 02:39:08.001365 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mxb5k" event={"ID":"032a3554-910a-472a-8537-73e08670ffe8","Type":"ContainerStarted","Data":"fa7b96e260598981474c211b1d4e4409f38a196a76e9ee89e6026c1b77ba5afa"} Feb 16 02:39:08.008671 master-0 kubenswrapper[31559]: I0216 02:39:08.008624 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jgr7s" event={"ID":"f313a2ac-7115-41c1-ac49-ede1baebd452","Type":"ContainerStarted","Data":"155cef6f9e6021de4e94dc97133dfd4e4abb237f5f102f0178d2da7dbc66f071"} Feb 16 02:39:08.170293 master-0 kubenswrapper[31559]: I0216 02:39:08.167787 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-72940-default-internal-api-0"] Feb 16 02:39:08.184987 master-0 kubenswrapper[31559]: E0216 02:39:08.184881 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e0b955-6516-4cb2-82e0-a6d1a88508c8" containerName="init" Feb 16 02:39:08.184987 master-0 kubenswrapper[31559]: I0216 02:39:08.184931 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e0b955-6516-4cb2-82e0-a6d1a88508c8" containerName="init" Feb 16 02:39:08.185412 master-0 kubenswrapper[31559]: I0216 02:39:08.185175 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e0b955-6516-4cb2-82e0-a6d1a88508c8" containerName="init" Feb 16 02:39:08.186228 master-0 kubenswrapper[31559]: I0216 02:39:08.186178 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.191992 master-0 kubenswrapper[31559]: I0216 02:39:08.190904 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-72940-default-internal-config-data" Feb 16 02:39:08.212227 master-0 kubenswrapper[31559]: I0216 02:39:08.192574 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 02:39:08.218125 master-0 kubenswrapper[31559]: I0216 02:39:08.217547 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-72940-default-internal-api-0"] Feb 16 02:39:08.253566 master-0 kubenswrapper[31559]: I0216 02:39:08.253488 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mxb5k" podStartSLOduration=3.253467146 podStartE2EDuration="3.253467146s" podCreationTimestamp="2026-02-16 02:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:08.233793163 +0000 UTC m=+1000.578399178" watchObservedRunningTime="2026-02-16 02:39:08.253467146 +0000 UTC m=+1000.598073161" Feb 16 02:39:08.283282 master-0 kubenswrapper[31559]: I0216 02:39:08.276348 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-combined-ca-bundle\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.283282 master-0 kubenswrapper[31559]: I0216 02:39:08.276494 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod 
\"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.283282 master-0 kubenswrapper[31559]: I0216 02:39:08.276514 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-config-data\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.283282 master-0 kubenswrapper[31559]: I0216 02:39:08.276552 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-scripts\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.283282 master-0 kubenswrapper[31559]: I0216 02:39:08.276595 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl5pv\" (UniqueName: \"kubernetes.io/projected/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-kube-api-access-xl5pv\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.283282 master-0 kubenswrapper[31559]: I0216 02:39:08.276651 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-logs\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.283282 master-0 kubenswrapper[31559]: I0216 02:39:08.276982 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-httpd-run\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.283282 master-0 kubenswrapper[31559]: I0216 02:39:08.277028 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-internal-tls-certs\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.322416 master-0 kubenswrapper[31559]: I0216 02:39:08.322380 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95bcd475-rv868"] Feb 16 02:39:08.336257 master-0 kubenswrapper[31559]: I0216 02:39:08.336201 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d95bcd475-rv868"] Feb 16 02:39:08.378642 master-0 kubenswrapper[31559]: I0216 02:39:08.378574 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-internal-tls-certs\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 kubenswrapper[31559]: I0216 02:39:08.378689 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-combined-ca-bundle\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 
kubenswrapper[31559]: I0216 02:39:08.378721 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 kubenswrapper[31559]: I0216 02:39:08.378737 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-config-data\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 kubenswrapper[31559]: I0216 02:39:08.378763 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-scripts\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 kubenswrapper[31559]: I0216 02:39:08.378799 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl5pv\" (UniqueName: \"kubernetes.io/projected/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-kube-api-access-xl5pv\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 kubenswrapper[31559]: I0216 02:39:08.378840 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-logs\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " 
pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 kubenswrapper[31559]: I0216 02:39:08.378881 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-httpd-run\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 kubenswrapper[31559]: I0216 02:39:08.379993 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-httpd-run\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.382971 master-0 kubenswrapper[31559]: I0216 02:39:08.380294 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-logs\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.389262 master-0 kubenswrapper[31559]: I0216 02:39:08.389214 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-combined-ca-bundle\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.407887 master-0 kubenswrapper[31559]: I0216 02:39:08.407151 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 02:39:08.407887 master-0 kubenswrapper[31559]: I0216 02:39:08.407201 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0789e8af4b098ede363a908d5ca2de42a744f798f7b80a69915d2057c015778f/globalmount\"" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.410157 master-0 kubenswrapper[31559]: I0216 02:39:08.410122 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-config-data\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.414252 master-0 kubenswrapper[31559]: I0216 02:39:08.413823 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-scripts\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.421192 master-0 kubenswrapper[31559]: I0216 02:39:08.421011 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-internal-tls-certs\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.429947 master-0 kubenswrapper[31559]: I0216 02:39:08.428172 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl5pv\" (UniqueName: 
\"kubernetes.io/projected/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-kube-api-access-xl5pv\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:08.657790 master-0 kubenswrapper[31559]: I0216 02:39:08.657750 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:08.789618 master-0 kubenswrapper[31559]: I0216 02:39:08.789508 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72v72\" (UniqueName: \"kubernetes.io/projected/21d15bc7-f350-4b27-a27c-df6b81c9b50b-kube-api-access-72v72\") pod \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\" (UID: \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\") " Feb 16 02:39:08.789804 master-0 kubenswrapper[31559]: I0216 02:39:08.789636 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d15bc7-f350-4b27-a27c-df6b81c9b50b-operator-scripts\") pod \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\" (UID: \"21d15bc7-f350-4b27-a27c-df6b81c9b50b\") " Feb 16 02:39:08.790563 master-0 kubenswrapper[31559]: I0216 02:39:08.790522 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d15bc7-f350-4b27-a27c-df6b81c9b50b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21d15bc7-f350-4b27-a27c-df6b81c9b50b" (UID: "21d15bc7-f350-4b27-a27c-df6b81c9b50b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:08.794651 master-0 kubenswrapper[31559]: I0216 02:39:08.794594 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d15bc7-f350-4b27-a27c-df6b81c9b50b-kube-api-access-72v72" (OuterVolumeSpecName: "kube-api-access-72v72") pod "21d15bc7-f350-4b27-a27c-df6b81c9b50b" (UID: "21d15bc7-f350-4b27-a27c-df6b81c9b50b"). InnerVolumeSpecName "kube-api-access-72v72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:08.798999 master-0 kubenswrapper[31559]: I0216 02:39:08.798938 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:08.892368 master-0 kubenswrapper[31559]: I0216 02:39:08.892120 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72v72\" (UniqueName: \"kubernetes.io/projected/21d15bc7-f350-4b27-a27c-df6b81c9b50b-kube-api-access-72v72\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:08.892368 master-0 kubenswrapper[31559]: I0216 02:39:08.892161 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d15bc7-f350-4b27-a27c-df6b81c9b50b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.024910 master-0 kubenswrapper[31559]: I0216 02:39:09.024854 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" event={"ID":"4b6218a7-c0ec-46b5-a1df-cdccc54d540c","Type":"ContainerStarted","Data":"830f28140856edc42ddb2e2402fdba876374b84819f772279736d4958da75a51"} Feb 16 02:39:09.025378 master-0 kubenswrapper[31559]: I0216 02:39:09.025322 31559 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:09.036690 master-0 kubenswrapper[31559]: I0216 02:39:09.036611 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-nqj26" event={"ID":"21d15bc7-f350-4b27-a27c-df6b81c9b50b","Type":"ContainerDied","Data":"c2c297bc88504ecc2cb472f4f6b06d2c76e90f350960dcbf4e27c54f1f1a7cca"} Feb 16 02:39:09.036690 master-0 kubenswrapper[31559]: I0216 02:39:09.036674 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2c297bc88504ecc2cb472f4f6b06d2c76e90f350960dcbf4e27c54f1f1a7cca" Feb 16 02:39:09.036690 master-0 kubenswrapper[31559]: I0216 02:39:09.036693 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:09.036943 master-0 kubenswrapper[31559]: I0216 02:39:09.036786 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-nqj26" Feb 16 02:39:09.052060 master-0 kubenswrapper[31559]: I0216 02:39:09.051899 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" podStartSLOduration=4.05188574 podStartE2EDuration="4.05188574s" podCreationTimestamp="2026-02-16 02:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:09.047946789 +0000 UTC m=+1001.392552804" watchObservedRunningTime="2026-02-16 02:39:09.05188574 +0000 UTC m=+1001.396491755" Feb 16 02:39:09.067316 master-0 kubenswrapper[31559]: I0216 02:39:09.067240 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:09.197298 master-0 kubenswrapper[31559]: I0216 02:39:09.197237 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-combined-ca-bundle\") pod \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " Feb 16 02:39:09.197298 master-0 kubenswrapper[31559]: I0216 02:39:09.197300 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk8mz\" (UniqueName: \"kubernetes.io/projected/34f9638c-9640-4b80-9be1-cf1e208c3f2a-kube-api-access-lk8mz\") pod \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " Feb 16 02:39:09.197557 master-0 kubenswrapper[31559]: I0216 02:39:09.197339 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-logs\") pod \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " Feb 16 02:39:09.197557 master-0 kubenswrapper[31559]: I0216 02:39:09.197547 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " Feb 16 02:39:09.197623 master-0 kubenswrapper[31559]: I0216 02:39:09.197592 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-httpd-run\") pod \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " Feb 16 02:39:09.197794 master-0 kubenswrapper[31559]: I0216 02:39:09.197771 31559 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-scripts\") pod \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " Feb 16 02:39:09.197878 master-0 kubenswrapper[31559]: I0216 02:39:09.197845 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-public-tls-certs\") pod \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " Feb 16 02:39:09.197938 master-0 kubenswrapper[31559]: I0216 02:39:09.197918 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-config-data\") pod \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\" (UID: \"34f9638c-9640-4b80-9be1-cf1e208c3f2a\") " Feb 16 02:39:09.198833 master-0 kubenswrapper[31559]: I0216 02:39:09.198550 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "34f9638c-9640-4b80-9be1-cf1e208c3f2a" (UID: "34f9638c-9640-4b80-9be1-cf1e208c3f2a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:39:09.199290 master-0 kubenswrapper[31559]: I0216 02:39:09.199233 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-logs" (OuterVolumeSpecName: "logs") pod "34f9638c-9640-4b80-9be1-cf1e208c3f2a" (UID: "34f9638c-9640-4b80-9be1-cf1e208c3f2a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:39:09.202079 master-0 kubenswrapper[31559]: I0216 02:39:09.201235 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f9638c-9640-4b80-9be1-cf1e208c3f2a-kube-api-access-lk8mz" (OuterVolumeSpecName: "kube-api-access-lk8mz") pod "34f9638c-9640-4b80-9be1-cf1e208c3f2a" (UID: "34f9638c-9640-4b80-9be1-cf1e208c3f2a"). InnerVolumeSpecName "kube-api-access-lk8mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:09.203486 master-0 kubenswrapper[31559]: I0216 02:39:09.203387 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "34f9638c-9640-4b80-9be1-cf1e208c3f2a" (UID: "34f9638c-9640-4b80-9be1-cf1e208c3f2a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:09.204655 master-0 kubenswrapper[31559]: I0216 02:39:09.204277 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34f9638c-9640-4b80-9be1-cf1e208c3f2a" (UID: "34f9638c-9640-4b80-9be1-cf1e208c3f2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:09.214728 master-0 kubenswrapper[31559]: I0216 02:39:09.214616 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-config-data" (OuterVolumeSpecName: "config-data") pod "34f9638c-9640-4b80-9be1-cf1e208c3f2a" (UID: "34f9638c-9640-4b80-9be1-cf1e208c3f2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:09.246825 master-0 kubenswrapper[31559]: I0216 02:39:09.246750 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-scripts" (OuterVolumeSpecName: "scripts") pod "34f9638c-9640-4b80-9be1-cf1e208c3f2a" (UID: "34f9638c-9640-4b80-9be1-cf1e208c3f2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:09.300502 master-0 kubenswrapper[31559]: I0216 02:39:09.300451 31559 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.300502 master-0 kubenswrapper[31559]: I0216 02:39:09.300488 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.300502 master-0 kubenswrapper[31559]: I0216 02:39:09.300498 31559 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.300502 master-0 kubenswrapper[31559]: I0216 02:39:09.300509 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.300502 master-0 kubenswrapper[31559]: I0216 02:39:09.300517 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f9638c-9640-4b80-9be1-cf1e208c3f2a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.300813 master-0 kubenswrapper[31559]: I0216 02:39:09.300527 31559 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-lk8mz\" (UniqueName: \"kubernetes.io/projected/34f9638c-9640-4b80-9be1-cf1e208c3f2a-kube-api-access-lk8mz\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.300813 master-0 kubenswrapper[31559]: I0216 02:39:09.300537 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f9638c-9640-4b80-9be1-cf1e208c3f2a-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.591679 master-0 kubenswrapper[31559]: I0216 02:39:09.591632 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:09.712581 master-0 kubenswrapper[31559]: I0216 02:39:09.712522 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwdwq\" (UniqueName: \"kubernetes.io/projected/5a67f2cf-c024-48a0-8341-15aca05beca0-kube-api-access-xwdwq\") pod \"5a67f2cf-c024-48a0-8341-15aca05beca0\" (UID: \"5a67f2cf-c024-48a0-8341-15aca05beca0\") " Feb 16 02:39:09.712581 master-0 kubenswrapper[31559]: I0216 02:39:09.712582 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a67f2cf-c024-48a0-8341-15aca05beca0-operator-scripts\") pod \"5a67f2cf-c024-48a0-8341-15aca05beca0\" (UID: \"5a67f2cf-c024-48a0-8341-15aca05beca0\") " Feb 16 02:39:09.713739 master-0 kubenswrapper[31559]: I0216 02:39:09.713704 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a67f2cf-c024-48a0-8341-15aca05beca0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a67f2cf-c024-48a0-8341-15aca05beca0" (UID: "5a67f2cf-c024-48a0-8341-15aca05beca0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:09.721377 master-0 kubenswrapper[31559]: I0216 02:39:09.721304 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a67f2cf-c024-48a0-8341-15aca05beca0-kube-api-access-xwdwq" (OuterVolumeSpecName: "kube-api-access-xwdwq") pod "5a67f2cf-c024-48a0-8341-15aca05beca0" (UID: "5a67f2cf-c024-48a0-8341-15aca05beca0"). InnerVolumeSpecName "kube-api-access-xwdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:09.815735 master-0 kubenswrapper[31559]: I0216 02:39:09.815677 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a67f2cf-c024-48a0-8341-15aca05beca0-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.815735 master-0 kubenswrapper[31559]: I0216 02:39:09.815714 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwdwq\" (UniqueName: \"kubernetes.io/projected/5a67f2cf-c024-48a0-8341-15aca05beca0-kube-api-access-xwdwq\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:09.941671 master-0 kubenswrapper[31559]: I0216 02:39:09.941607 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e0b955-6516-4cb2-82e0-a6d1a88508c8" path="/var/lib/kubelet/pods/69e0b955-6516-4cb2-82e0-a6d1a88508c8/volumes" Feb 16 02:39:10.047818 master-0 kubenswrapper[31559]: I0216 02:39:10.047641 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5df0-account-create-update-xnvlf" event={"ID":"5a67f2cf-c024-48a0-8341-15aca05beca0","Type":"ContainerDied","Data":"33131ff61869e28fbb7c08b979ec302e42b9a0aa63bcea5c88a1741b68300192"} Feb 16 02:39:10.047818 master-0 kubenswrapper[31559]: I0216 02:39:10.047692 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33131ff61869e28fbb7c08b979ec302e42b9a0aa63bcea5c88a1741b68300192" Feb 16 02:39:10.047818 master-0 
kubenswrapper[31559]: I0216 02:39:10.047692 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5df0-account-create-update-xnvlf" Feb 16 02:39:10.050569 master-0 kubenswrapper[31559]: I0216 02:39:10.050523 31559 generic.go:334] "Generic (PLEG): container finished" podID="e376c3b4-d1f6-4f67-bd17-406917b6c866" containerID="9957c94575b16139adde1349a9b018de6417007657f1a1083222f6f2e05706b7" exitCode=0 Feb 16 02:39:10.050635 master-0 kubenswrapper[31559]: I0216 02:39:10.050594 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6glzm" event={"ID":"e376c3b4-d1f6-4f67-bd17-406917b6c866","Type":"ContainerDied","Data":"9957c94575b16139adde1349a9b018de6417007657f1a1083222f6f2e05706b7"} Feb 16 02:39:10.050726 master-0 kubenswrapper[31559]: I0216 02:39:10.050671 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.356469 master-0 kubenswrapper[31559]: I0216 02:39:10.356406 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8" (OuterVolumeSpecName: "glance") pod "34f9638c-9640-4b80-9be1-cf1e208c3f2a" (UID: "34f9638c-9640-4b80-9be1-cf1e208c3f2a"). InnerVolumeSpecName "pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 02:39:10.361780 master-0 kubenswrapper[31559]: I0216 02:39:10.361747 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod \"glance-72940-default-internal-api-0\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:10.391757 master-0 kubenswrapper[31559]: I0216 02:39:10.391642 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:10.428875 master-0 kubenswrapper[31559]: I0216 02:39:10.428808 31559 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") on node \"master-0\" " Feb 16 02:39:10.453257 master-0 kubenswrapper[31559]: I0216 02:39:10.453216 31559 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 02:39:10.453762 master-0 kubenswrapper[31559]: I0216 02:39:10.453364 31559 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f" (UniqueName: "kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8") on node "master-0" Feb 16 02:39:10.530754 master-0 kubenswrapper[31559]: I0216 02:39:10.530714 31559 reconciler_common.go:293] "Volume detached for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:10.784178 master-0 kubenswrapper[31559]: I0216 02:39:10.783483 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:39:10.794408 master-0 kubenswrapper[31559]: I0216 02:39:10.792633 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:39:10.821516 master-0 kubenswrapper[31559]: I0216 02:39:10.821042 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:39:10.826951 master-0 kubenswrapper[31559]: E0216 02:39:10.826888 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d15bc7-f350-4b27-a27c-df6b81c9b50b" containerName="mariadb-database-create" Feb 16 02:39:10.826951 
master-0 kubenswrapper[31559]: I0216 02:39:10.826945 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d15bc7-f350-4b27-a27c-df6b81c9b50b" containerName="mariadb-database-create" Feb 16 02:39:10.827165 master-0 kubenswrapper[31559]: E0216 02:39:10.826976 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a67f2cf-c024-48a0-8341-15aca05beca0" containerName="mariadb-account-create-update" Feb 16 02:39:10.828333 master-0 kubenswrapper[31559]: I0216 02:39:10.828307 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a67f2cf-c024-48a0-8341-15aca05beca0" containerName="mariadb-account-create-update" Feb 16 02:39:10.828686 master-0 kubenswrapper[31559]: I0216 02:39:10.828664 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d15bc7-f350-4b27-a27c-df6b81c9b50b" containerName="mariadb-database-create" Feb 16 02:39:10.829269 master-0 kubenswrapper[31559]: I0216 02:39:10.828693 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a67f2cf-c024-48a0-8341-15aca05beca0" containerName="mariadb-account-create-update" Feb 16 02:39:10.834016 master-0 kubenswrapper[31559]: I0216 02:39:10.833986 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.846258 master-0 kubenswrapper[31559]: I0216 02:39:10.846195 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-72940-default-external-config-data" Feb 16 02:39:10.846704 master-0 kubenswrapper[31559]: I0216 02:39:10.846623 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 02:39:10.854108 master-0 kubenswrapper[31559]: I0216 02:39:10.854060 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:39:10.943853 master-0 kubenswrapper[31559]: I0216 02:39:10.943742 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.943853 master-0 kubenswrapper[31559]: I0216 02:39:10.943831 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.944364 master-0 kubenswrapper[31559]: I0216 02:39:10.943887 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpdlm\" (UniqueName: \"kubernetes.io/projected/9b2014b4-1c15-4883-8435-1f26396ed008-kube-api-access-fpdlm\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.944364 master-0 
kubenswrapper[31559]: I0216 02:39:10.943920 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.944364 master-0 kubenswrapper[31559]: I0216 02:39:10.944035 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.944364 master-0 kubenswrapper[31559]: I0216 02:39:10.944126 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.944364 master-0 kubenswrapper[31559]: I0216 02:39:10.944211 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-combined-ca-bundle\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:10.944364 master-0 kubenswrapper[31559]: I0216 02:39:10.944252 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-config-data\") pod 
\"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.046458 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-combined-ca-bundle\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.046525 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-config-data\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.046609 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.046658 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.047389 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpdlm\" (UniqueName: 
\"kubernetes.io/projected/9b2014b4-1c15-4883-8435-1f26396ed008-kube-api-access-fpdlm\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.047428 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.047624 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.047782 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.048002 master-0 kubenswrapper[31559]: I0216 02:39:11.047857 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.049503 master-0 kubenswrapper[31559]: I0216 02:39:11.049269 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.049829 master-0 kubenswrapper[31559]: I0216 02:39:11.049800 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 02:39:11.049883 master-0 kubenswrapper[31559]: I0216 02:39:11.049830 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6df8a36abcf3de2186a10fb6ccf098d30950cdf1ed8f716dfec080b59e1d20e7/globalmount\"" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.053331 master-0 kubenswrapper[31559]: I0216 02:39:11.053299 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-config-data\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.068829 master-0 kubenswrapper[31559]: I0216 02:39:11.068695 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.069220 master-0 kubenswrapper[31559]: I0216 02:39:11.069193 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.069833 master-0 kubenswrapper[31559]: I0216 02:39:11.069652 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-combined-ca-bundle\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.072110 master-0 kubenswrapper[31559]: I0216 02:39:11.072056 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpdlm\" (UniqueName: \"kubernetes.io/projected/9b2014b4-1c15-4883-8435-1f26396ed008-kube-api-access-fpdlm\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:11.782563 master-0 kubenswrapper[31559]: I0216 02:39:11.769987 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:11.865225 master-0 kubenswrapper[31559]: I0216 02:39:11.865160 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-credential-keys\") pod \"e376c3b4-d1f6-4f67-bd17-406917b6c866\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " Feb 16 02:39:11.865338 master-0 kubenswrapper[31559]: I0216 02:39:11.865319 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-combined-ca-bundle\") pod \"e376c3b4-d1f6-4f67-bd17-406917b6c866\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " Feb 16 02:39:11.865535 master-0 kubenswrapper[31559]: I0216 02:39:11.865487 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-fernet-keys\") pod \"e376c3b4-d1f6-4f67-bd17-406917b6c866\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " Feb 16 02:39:11.869871 master-0 kubenswrapper[31559]: I0216 02:39:11.866858 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-config-data\") pod \"e376c3b4-d1f6-4f67-bd17-406917b6c866\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " Feb 16 02:39:11.869871 master-0 kubenswrapper[31559]: I0216 02:39:11.868491 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-scripts\") pod \"e376c3b4-d1f6-4f67-bd17-406917b6c866\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " Feb 16 02:39:11.869871 master-0 kubenswrapper[31559]: I0216 02:39:11.868836 31559 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-z57cd\" (UniqueName: \"kubernetes.io/projected/e376c3b4-d1f6-4f67-bd17-406917b6c866-kube-api-access-z57cd\") pod \"e376c3b4-d1f6-4f67-bd17-406917b6c866\" (UID: \"e376c3b4-d1f6-4f67-bd17-406917b6c866\") " Feb 16 02:39:11.870687 master-0 kubenswrapper[31559]: I0216 02:39:11.870641 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e376c3b4-d1f6-4f67-bd17-406917b6c866" (UID: "e376c3b4-d1f6-4f67-bd17-406917b6c866"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:11.871861 master-0 kubenswrapper[31559]: I0216 02:39:11.871770 31559 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:11.873536 master-0 kubenswrapper[31559]: I0216 02:39:11.873490 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e376c3b4-d1f6-4f67-bd17-406917b6c866" (UID: "e376c3b4-d1f6-4f67-bd17-406917b6c866"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:11.874739 master-0 kubenswrapper[31559]: I0216 02:39:11.873954 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e376c3b4-d1f6-4f67-bd17-406917b6c866-kube-api-access-z57cd" (OuterVolumeSpecName: "kube-api-access-z57cd") pod "e376c3b4-d1f6-4f67-bd17-406917b6c866" (UID: "e376c3b4-d1f6-4f67-bd17-406917b6c866"). InnerVolumeSpecName "kube-api-access-z57cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:11.879252 master-0 kubenswrapper[31559]: I0216 02:39:11.879079 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-scripts" (OuterVolumeSpecName: "scripts") pod "e376c3b4-d1f6-4f67-bd17-406917b6c866" (UID: "e376c3b4-d1f6-4f67-bd17-406917b6c866"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:11.897702 master-0 kubenswrapper[31559]: I0216 02:39:11.895602 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e376c3b4-d1f6-4f67-bd17-406917b6c866" (UID: "e376c3b4-d1f6-4f67-bd17-406917b6c866"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:11.910601 master-0 kubenswrapper[31559]: I0216 02:39:11.908944 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-config-data" (OuterVolumeSpecName: "config-data") pod "e376c3b4-d1f6-4f67-bd17-406917b6c866" (UID: "e376c3b4-d1f6-4f67-bd17-406917b6c866"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:11.947706 master-0 kubenswrapper[31559]: I0216 02:39:11.947643 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f9638c-9640-4b80-9be1-cf1e208c3f2a" path="/var/lib/kubelet/pods/34f9638c-9640-4b80-9be1-cf1e208c3f2a/volumes" Feb 16 02:39:11.974090 master-0 kubenswrapper[31559]: I0216 02:39:11.974049 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:11.974283 master-0 kubenswrapper[31559]: I0216 02:39:11.974268 31559 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:11.974382 master-0 kubenswrapper[31559]: I0216 02:39:11.974367 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:11.974506 master-0 kubenswrapper[31559]: I0216 02:39:11.974493 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e376c3b4-d1f6-4f67-bd17-406917b6c866-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:11.974609 master-0 kubenswrapper[31559]: I0216 02:39:11.974592 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z57cd\" (UniqueName: \"kubernetes.io/projected/e376c3b4-d1f6-4f67-bd17-406917b6c866-kube-api-access-z57cd\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:12.073740 master-0 kubenswrapper[31559]: I0216 02:39:12.073668 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jgr7s" 
event={"ID":"f313a2ac-7115-41c1-ac49-ede1baebd452","Type":"ContainerStarted","Data":"b39b162722fec5db416e0318921cdfcdebd7b22a51d16df593ce4bf1ab9f8aa5"} Feb 16 02:39:12.078772 master-0 kubenswrapper[31559]: I0216 02:39:12.078723 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6glzm" event={"ID":"e376c3b4-d1f6-4f67-bd17-406917b6c866","Type":"ContainerDied","Data":"9bb130ca030373001395d08726bc73795324ee8475781b6f0c426a444565d5b8"} Feb 16 02:39:12.078879 master-0 kubenswrapper[31559]: I0216 02:39:12.078775 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb130ca030373001395d08726bc73795324ee8475781b6f0c426a444565d5b8" Feb 16 02:39:12.078931 master-0 kubenswrapper[31559]: I0216 02:39:12.078892 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6glzm" Feb 16 02:39:12.117559 master-0 kubenswrapper[31559]: I0216 02:39:12.115505 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jgr7s" podStartSLOduration=2.460560693 podStartE2EDuration="7.115478691s" podCreationTimestamp="2026-02-16 02:39:05 +0000 UTC" firstStartedPulling="2026-02-16 02:39:06.972084766 +0000 UTC m=+999.316690781" lastFinishedPulling="2026-02-16 02:39:11.627002764 +0000 UTC m=+1003.971608779" observedRunningTime="2026-02-16 02:39:12.099924883 +0000 UTC m=+1004.444530948" watchObservedRunningTime="2026-02-16 02:39:12.115478691 +0000 UTC m=+1004.460084706" Feb 16 02:39:12.200824 master-0 kubenswrapper[31559]: W0216 02:39:12.200761 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0e776d3_0db2_474c_ad7f_6798e0b59a3c.slice/crio-78623c7cf9ba16846090ca440c9884e1ada73bdbfc89227bd3523d117464dc19 WatchSource:0}: Error finding container 78623c7cf9ba16846090ca440c9884e1ada73bdbfc89227bd3523d117464dc19: Status 404 returned error 
can't find the container with id 78623c7cf9ba16846090ca440c9884e1ada73bdbfc89227bd3523d117464dc19 Feb 16 02:39:12.206608 master-0 kubenswrapper[31559]: I0216 02:39:12.206558 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-72940-default-internal-api-0"] Feb 16 02:39:12.219916 master-0 kubenswrapper[31559]: I0216 02:39:12.219836 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6glzm"] Feb 16 02:39:12.235465 master-0 kubenswrapper[31559]: I0216 02:39:12.235349 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6glzm"] Feb 16 02:39:12.312848 master-0 kubenswrapper[31559]: I0216 02:39:12.312795 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zzhf8"] Feb 16 02:39:12.313421 master-0 kubenswrapper[31559]: E0216 02:39:12.313399 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e376c3b4-d1f6-4f67-bd17-406917b6c866" containerName="keystone-bootstrap" Feb 16 02:39:12.313421 master-0 kubenswrapper[31559]: I0216 02:39:12.313417 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e376c3b4-d1f6-4f67-bd17-406917b6c866" containerName="keystone-bootstrap" Feb 16 02:39:12.313686 master-0 kubenswrapper[31559]: I0216 02:39:12.313667 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e376c3b4-d1f6-4f67-bd17-406917b6c866" containerName="keystone-bootstrap" Feb 16 02:39:12.318456 master-0 kubenswrapper[31559]: I0216 02:39:12.315097 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.318456 master-0 kubenswrapper[31559]: I0216 02:39:12.317451 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 02:39:12.318456 master-0 kubenswrapper[31559]: I0216 02:39:12.318187 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 02:39:12.318745 master-0 kubenswrapper[31559]: I0216 02:39:12.317883 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 02:39:12.318940 master-0 kubenswrapper[31559]: I0216 02:39:12.318915 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 02:39:12.354786 master-0 kubenswrapper[31559]: I0216 02:39:12.348512 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zzhf8"] Feb 16 02:39:12.385557 master-0 kubenswrapper[31559]: I0216 02:39:12.385412 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-fernet-keys\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.385557 master-0 kubenswrapper[31559]: I0216 02:39:12.385496 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-credential-keys\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.385557 master-0 kubenswrapper[31559]: I0216 02:39:12.385541 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-scripts\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.385862 master-0 kubenswrapper[31559]: I0216 02:39:12.385562 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-combined-ca-bundle\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.385862 master-0 kubenswrapper[31559]: I0216 02:39:12.385640 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8hm6\" (UniqueName: \"kubernetes.io/projected/456d0b0c-176d-4d17-993b-d02048e33d25-kube-api-access-p8hm6\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.385862 master-0 kubenswrapper[31559]: I0216 02:39:12.385709 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-config-data\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.487531 master-0 kubenswrapper[31559]: I0216 02:39:12.487422 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-fernet-keys\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.487531 master-0 kubenswrapper[31559]: I0216 02:39:12.487486 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-credential-keys\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.487531 master-0 kubenswrapper[31559]: I0216 02:39:12.487509 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-scripts\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.487531 master-0 kubenswrapper[31559]: I0216 02:39:12.487531 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-combined-ca-bundle\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.487843 master-0 kubenswrapper[31559]: I0216 02:39:12.487643 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8hm6\" (UniqueName: \"kubernetes.io/projected/456d0b0c-176d-4d17-993b-d02048e33d25-kube-api-access-p8hm6\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.487843 master-0 kubenswrapper[31559]: I0216 02:39:12.487716 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-config-data\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.491391 master-0 kubenswrapper[31559]: I0216 02:39:12.491348 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-config-data\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.492407 master-0 kubenswrapper[31559]: I0216 02:39:12.492366 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-credential-keys\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.492706 master-0 kubenswrapper[31559]: I0216 02:39:12.492672 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-combined-ca-bundle\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.493496 master-0 kubenswrapper[31559]: I0216 02:39:12.493475 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-fernet-keys\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.493992 master-0 kubenswrapper[31559]: I0216 02:39:12.493951 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-scripts\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.505919 master-0 kubenswrapper[31559]: I0216 02:39:12.505881 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8hm6\" (UniqueName: 
\"kubernetes.io/projected/456d0b0c-176d-4d17-993b-d02048e33d25-kube-api-access-p8hm6\") pod \"keystone-bootstrap-zzhf8\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.521671 master-0 kubenswrapper[31559]: I0216 02:39:12.521623 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:12.669660 master-0 kubenswrapper[31559]: I0216 02:39:12.669613 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:12.672690 master-0 kubenswrapper[31559]: I0216 02:39:12.672653 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:13.101862 master-0 kubenswrapper[31559]: I0216 02:39:13.101788 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"e0e776d3-0db2-474c-ad7f-6798e0b59a3c","Type":"ContainerStarted","Data":"852e77019e6f36e648b06570ae813160acdcdcb8bf5958343b9075f3cf0d9bf3"} Feb 16 02:39:13.101862 master-0 kubenswrapper[31559]: I0216 02:39:13.101853 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"e0e776d3-0db2-474c-ad7f-6798e0b59a3c","Type":"ContainerStarted","Data":"78623c7cf9ba16846090ca440c9884e1ada73bdbfc89227bd3523d117464dc19"} Feb 16 02:39:13.416081 master-0 kubenswrapper[31559]: I0216 02:39:13.416018 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zzhf8"] Feb 16 02:39:13.477916 master-0 kubenswrapper[31559]: I0216 02:39:13.477781 31559 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:39:13.492468 master-0 kubenswrapper[31559]: W0216 02:39:13.492329 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b2014b4_1c15_4883_8435_1f26396ed008.slice/crio-854e375b5788ff97303456a8cf5f2f324ada42f2b54e497516ce302f36b419b7 WatchSource:0}: Error finding container 854e375b5788ff97303456a8cf5f2f324ada42f2b54e497516ce302f36b419b7: Status 404 returned error can't find the container with id 854e375b5788ff97303456a8cf5f2f324ada42f2b54e497516ce302f36b419b7 Feb 16 02:39:13.941335 master-0 kubenswrapper[31559]: I0216 02:39:13.940542 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e376c3b4-d1f6-4f67-bd17-406917b6c866" path="/var/lib/kubelet/pods/e376c3b4-d1f6-4f67-bd17-406917b6c866/volumes" Feb 16 02:39:14.129303 master-0 kubenswrapper[31559]: I0216 02:39:14.129184 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" event={"ID":"9b2014b4-1c15-4883-8435-1f26396ed008","Type":"ContainerStarted","Data":"854e375b5788ff97303456a8cf5f2f324ada42f2b54e497516ce302f36b419b7"} Feb 16 02:39:14.134397 master-0 kubenswrapper[31559]: I0216 02:39:14.132651 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zzhf8" event={"ID":"456d0b0c-176d-4d17-993b-d02048e33d25","Type":"ContainerStarted","Data":"4b3cdded2da9fa3731909d910565acb813f101628c84b5933c4efa436aefe1a6"} Feb 16 02:39:14.134397 master-0 kubenswrapper[31559]: I0216 02:39:14.132762 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zzhf8" event={"ID":"456d0b0c-176d-4d17-993b-d02048e33d25","Type":"ContainerStarted","Data":"c1ee8994906ec917e01f98211834f50928974933aaf4dac7e319c9df1a991af9"} Feb 16 02:39:14.145330 master-0 kubenswrapper[31559]: I0216 02:39:14.143842 31559 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"e0e776d3-0db2-474c-ad7f-6798e0b59a3c","Type":"ContainerStarted","Data":"a156719f3471713b64f07c6e9ffea069af70f3c5cfb89722283b7cd9365bd26d"} Feb 16 02:39:14.172635 master-0 kubenswrapper[31559]: I0216 02:39:14.170549 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zzhf8" podStartSLOduration=2.17053 podStartE2EDuration="2.17053s" podCreationTimestamp="2026-02-16 02:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:14.169241187 +0000 UTC m=+1006.513847202" watchObservedRunningTime="2026-02-16 02:39:14.17053 +0000 UTC m=+1006.515136015" Feb 16 02:39:14.213300 master-0 kubenswrapper[31559]: I0216 02:39:14.213194 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-72940-default-internal-api-0" podStartSLOduration=6.213170312 podStartE2EDuration="6.213170312s" podCreationTimestamp="2026-02-16 02:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:14.203561586 +0000 UTC m=+1006.548167611" watchObservedRunningTime="2026-02-16 02:39:14.213170312 +0000 UTC m=+1006.557776327" Feb 16 02:39:15.160404 master-0 kubenswrapper[31559]: I0216 02:39:15.160333 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" event={"ID":"9b2014b4-1c15-4883-8435-1f26396ed008","Type":"ContainerStarted","Data":"d727907c96d4b6dc6c4393991a32d768caf72eeec9fb1de049864242f34db444"} Feb 16 02:39:15.160404 master-0 kubenswrapper[31559]: I0216 02:39:15.160386 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" 
event={"ID":"9b2014b4-1c15-4883-8435-1f26396ed008","Type":"ContainerStarted","Data":"84179352863d97c35954bf2153a31c72640c6c68fb23f5bda3ccdb7a1950ede3"} Feb 16 02:39:15.435903 master-0 kubenswrapper[31559]: I0216 02:39:15.435761 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-72940-default-external-api-0" podStartSLOduration=5.435738505 podStartE2EDuration="5.435738505s" podCreationTimestamp="2026-02-16 02:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:15.421265685 +0000 UTC m=+1007.765871720" watchObservedRunningTime="2026-02-16 02:39:15.435738505 +0000 UTC m=+1007.780344530" Feb 16 02:39:15.697124 master-0 kubenswrapper[31559]: I0216 02:39:15.697011 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-ljf52"] Feb 16 02:39:15.701652 master-0 kubenswrapper[31559]: I0216 02:39:15.701607 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.707524 master-0 kubenswrapper[31559]: I0216 02:39:15.707468 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Feb 16 02:39:15.709728 master-0 kubenswrapper[31559]: I0216 02:39:15.709698 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 16 02:39:15.711321 master-0 kubenswrapper[31559]: I0216 02:39:15.711272 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-ljf52"] Feb 16 02:39:15.803684 master-0 kubenswrapper[31559]: I0216 02:39:15.803261 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45gnj\" (UniqueName: \"kubernetes.io/projected/656fd1dc-bb87-40c8-a161-31a194c23629-kube-api-access-45gnj\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.803684 master-0 kubenswrapper[31559]: I0216 02:39:15.803415 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-combined-ca-bundle\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.803684 master-0 kubenswrapper[31559]: I0216 02:39:15.803522 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/656fd1dc-bb87-40c8-a161-31a194c23629-config-data-merged\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.803684 master-0 kubenswrapper[31559]: I0216 02:39:15.803675 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-config-data\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.804147 master-0 kubenswrapper[31559]: I0216 02:39:15.803815 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/656fd1dc-bb87-40c8-a161-31a194c23629-etc-podinfo\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.804147 master-0 kubenswrapper[31559]: I0216 02:39:15.803894 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-scripts\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.879750 master-0 kubenswrapper[31559]: I0216 02:39:15.879695 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" Feb 16 02:39:15.917551 master-0 kubenswrapper[31559]: I0216 02:39:15.917417 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-config-data\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.917551 master-0 kubenswrapper[31559]: I0216 02:39:15.917505 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/656fd1dc-bb87-40c8-a161-31a194c23629-etc-podinfo\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.917551 
master-0 kubenswrapper[31559]: I0216 02:39:15.917536 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-scripts\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.918302 master-0 kubenswrapper[31559]: I0216 02:39:15.918237 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45gnj\" (UniqueName: \"kubernetes.io/projected/656fd1dc-bb87-40c8-a161-31a194c23629-kube-api-access-45gnj\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.918953 master-0 kubenswrapper[31559]: I0216 02:39:15.918919 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-combined-ca-bundle\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.919570 master-0 kubenswrapper[31559]: I0216 02:39:15.919532 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/656fd1dc-bb87-40c8-a161-31a194c23629-config-data-merged\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.920156 master-0 kubenswrapper[31559]: I0216 02:39:15.920126 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/656fd1dc-bb87-40c8-a161-31a194c23629-config-data-merged\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.922102 master-0 kubenswrapper[31559]: I0216 02:39:15.922074 
31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/656fd1dc-bb87-40c8-a161-31a194c23629-etc-podinfo\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.922225 master-0 kubenswrapper[31559]: I0216 02:39:15.922110 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-scripts\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.922966 master-0 kubenswrapper[31559]: I0216 02:39:15.922919 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-config-data\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.936403 master-0 kubenswrapper[31559]: I0216 02:39:15.936362 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-combined-ca-bundle\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.966416 master-0 kubenswrapper[31559]: I0216 02:39:15.966378 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45gnj\" (UniqueName: \"kubernetes.io/projected/656fd1dc-bb87-40c8-a161-31a194c23629-kube-api-access-45gnj\") pod \"ironic-db-sync-ljf52\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") " pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:15.973456 master-0 kubenswrapper[31559]: I0216 02:39:15.973383 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cdd8bf54c-48llw"] Feb 16 
02:39:15.973673 master-0 kubenswrapper[31559]: I0216 02:39:15.973623 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerName="dnsmasq-dns" containerID="cri-o://ca0de1cbccc5e3d2a99d5a914ee181b235fc0306e6c10036ba9a1ff6e1fb316d" gracePeriod=10 Feb 16 02:39:16.053152 master-0 kubenswrapper[31559]: I0216 02:39:16.053089 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-ljf52" Feb 16 02:39:16.175578 master-0 kubenswrapper[31559]: I0216 02:39:16.175429 31559 generic.go:334] "Generic (PLEG): container finished" podID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerID="ca0de1cbccc5e3d2a99d5a914ee181b235fc0306e6c10036ba9a1ff6e1fb316d" exitCode=0 Feb 16 02:39:16.176603 master-0 kubenswrapper[31559]: I0216 02:39:16.175772 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" event={"ID":"1d38194f-c1be-4b20-acdd-793a9bef8b1b","Type":"ContainerDied","Data":"ca0de1cbccc5e3d2a99d5a914ee181b235fc0306e6c10036ba9a1ff6e1fb316d"} Feb 16 02:39:16.177220 master-0 kubenswrapper[31559]: I0216 02:39:16.177171 31559 generic.go:334] "Generic (PLEG): container finished" podID="f313a2ac-7115-41c1-ac49-ede1baebd452" containerID="b39b162722fec5db416e0318921cdfcdebd7b22a51d16df593ce4bf1ab9f8aa5" exitCode=0 Feb 16 02:39:16.177597 master-0 kubenswrapper[31559]: I0216 02:39:16.177556 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jgr7s" event={"ID":"f313a2ac-7115-41c1-ac49-ede1baebd452","Type":"ContainerDied","Data":"b39b162722fec5db416e0318921cdfcdebd7b22a51d16df593ce4bf1ab9f8aa5"} Feb 16 02:39:17.193521 master-0 kubenswrapper[31559]: I0216 02:39:17.193415 31559 generic.go:334] "Generic (PLEG): container finished" podID="456d0b0c-176d-4d17-993b-d02048e33d25" 
containerID="4b3cdded2da9fa3731909d910565acb813f101628c84b5933c4efa436aefe1a6" exitCode=0 Feb 16 02:39:17.193521 master-0 kubenswrapper[31559]: I0216 02:39:17.193475 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zzhf8" event={"ID":"456d0b0c-176d-4d17-993b-d02048e33d25","Type":"ContainerDied","Data":"4b3cdded2da9fa3731909d910565acb813f101628c84b5933c4efa436aefe1a6"} Feb 16 02:39:19.535208 master-0 kubenswrapper[31559]: I0216 02:39:19.535110 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.206:5353: connect: connection refused" Feb 16 02:39:20.392033 master-0 kubenswrapper[31559]: I0216 02:39:20.391930 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:20.392033 master-0 kubenswrapper[31559]: I0216 02:39:20.392019 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:20.433164 master-0 kubenswrapper[31559]: I0216 02:39:20.433062 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:20.458728 master-0 kubenswrapper[31559]: I0216 02:39:20.458653 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:21.246219 master-0 kubenswrapper[31559]: I0216 02:39:21.246149 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:21.246219 master-0 kubenswrapper[31559]: I0216 02:39:21.246223 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:22.673484 master-0 
kubenswrapper[31559]: I0216 02:39:22.673410 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:22.673484 master-0 kubenswrapper[31559]: I0216 02:39:22.673481 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:22.705936 master-0 kubenswrapper[31559]: I0216 02:39:22.705850 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:22.722394 master-0 kubenswrapper[31559]: I0216 02:39:22.722336 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:23.180691 master-0 kubenswrapper[31559]: I0216 02:39:23.180602 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:23.181191 master-0 kubenswrapper[31559]: I0216 02:39:23.181116 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:39:23.279002 master-0 kubenswrapper[31559]: I0216 02:39:23.278928 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:23.279002 master-0 kubenswrapper[31559]: I0216 02:39:23.278978 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:24.547639 master-0 kubenswrapper[31559]: I0216 02:39:24.547560 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.206:5353: connect: connection refused" Feb 16 02:39:24.689579 master-0 kubenswrapper[31559]: I0216 
02:39:24.689526 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:24.701040 master-0 kubenswrapper[31559]: I0216 02:39:24.700944 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:24.750574 master-0 kubenswrapper[31559]: I0216 02:39:24.750507 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-combined-ca-bundle\") pod \"f313a2ac-7115-41c1-ac49-ede1baebd452\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " Feb 16 02:39:24.750802 master-0 kubenswrapper[31559]: I0216 02:39:24.750633 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-scripts\") pod \"f313a2ac-7115-41c1-ac49-ede1baebd452\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " Feb 16 02:39:24.750802 master-0 kubenswrapper[31559]: I0216 02:39:24.750675 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-config-data\") pod \"f313a2ac-7115-41c1-ac49-ede1baebd452\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " Feb 16 02:39:24.750802 master-0 kubenswrapper[31559]: I0216 02:39:24.750776 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f313a2ac-7115-41c1-ac49-ede1baebd452-logs\") pod \"f313a2ac-7115-41c1-ac49-ede1baebd452\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " Feb 16 02:39:24.750899 master-0 kubenswrapper[31559]: I0216 02:39:24.750861 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5pqd\" (UniqueName: 
\"kubernetes.io/projected/f313a2ac-7115-41c1-ac49-ede1baebd452-kube-api-access-g5pqd\") pod \"f313a2ac-7115-41c1-ac49-ede1baebd452\" (UID: \"f313a2ac-7115-41c1-ac49-ede1baebd452\") " Feb 16 02:39:24.753378 master-0 kubenswrapper[31559]: I0216 02:39:24.753319 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f313a2ac-7115-41c1-ac49-ede1baebd452-logs" (OuterVolumeSpecName: "logs") pod "f313a2ac-7115-41c1-ac49-ede1baebd452" (UID: "f313a2ac-7115-41c1-ac49-ede1baebd452"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:39:24.756732 master-0 kubenswrapper[31559]: I0216 02:39:24.756691 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-scripts" (OuterVolumeSpecName: "scripts") pod "f313a2ac-7115-41c1-ac49-ede1baebd452" (UID: "f313a2ac-7115-41c1-ac49-ede1baebd452"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:24.775227 master-0 kubenswrapper[31559]: I0216 02:39:24.774621 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f313a2ac-7115-41c1-ac49-ede1baebd452-kube-api-access-g5pqd" (OuterVolumeSpecName: "kube-api-access-g5pqd") pod "f313a2ac-7115-41c1-ac49-ede1baebd452" (UID: "f313a2ac-7115-41c1-ac49-ede1baebd452"). InnerVolumeSpecName "kube-api-access-g5pqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:24.783981 master-0 kubenswrapper[31559]: I0216 02:39:24.783923 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-config-data" (OuterVolumeSpecName: "config-data") pod "f313a2ac-7115-41c1-ac49-ede1baebd452" (UID: "f313a2ac-7115-41c1-ac49-ede1baebd452"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:24.794072 master-0 kubenswrapper[31559]: I0216 02:39:24.794019 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f313a2ac-7115-41c1-ac49-ede1baebd452" (UID: "f313a2ac-7115-41c1-ac49-ede1baebd452"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:24.852044 master-0 kubenswrapper[31559]: I0216 02:39:24.851939 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-combined-ca-bundle\") pod \"456d0b0c-176d-4d17-993b-d02048e33d25\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852268 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-scripts\") pod \"456d0b0c-176d-4d17-993b-d02048e33d25\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852339 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-config-data\") pod \"456d0b0c-176d-4d17-993b-d02048e33d25\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852364 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8hm6\" (UniqueName: \"kubernetes.io/projected/456d0b0c-176d-4d17-993b-d02048e33d25-kube-api-access-p8hm6\") pod \"456d0b0c-176d-4d17-993b-d02048e33d25\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " Feb 16 
02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852452 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-credential-keys\") pod \"456d0b0c-176d-4d17-993b-d02048e33d25\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852533 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-fernet-keys\") pod \"456d0b0c-176d-4d17-993b-d02048e33d25\" (UID: \"456d0b0c-176d-4d17-993b-d02048e33d25\") " Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852921 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f313a2ac-7115-41c1-ac49-ede1baebd452-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852946 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5pqd\" (UniqueName: \"kubernetes.io/projected/f313a2ac-7115-41c1-ac49-ede1baebd452-kube-api-access-g5pqd\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852956 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852965 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.853080 master-0 kubenswrapper[31559]: I0216 02:39:24.852972 31559 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f313a2ac-7115-41c1-ac49-ede1baebd452-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.858913 master-0 kubenswrapper[31559]: I0216 02:39:24.858873 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456d0b0c-176d-4d17-993b-d02048e33d25-kube-api-access-p8hm6" (OuterVolumeSpecName: "kube-api-access-p8hm6") pod "456d0b0c-176d-4d17-993b-d02048e33d25" (UID: "456d0b0c-176d-4d17-993b-d02048e33d25"). InnerVolumeSpecName "kube-api-access-p8hm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:24.859016 master-0 kubenswrapper[31559]: I0216 02:39:24.858922 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-scripts" (OuterVolumeSpecName: "scripts") pod "456d0b0c-176d-4d17-993b-d02048e33d25" (UID: "456d0b0c-176d-4d17-993b-d02048e33d25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:24.859016 master-0 kubenswrapper[31559]: I0216 02:39:24.858887 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "456d0b0c-176d-4d17-993b-d02048e33d25" (UID: "456d0b0c-176d-4d17-993b-d02048e33d25"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:24.860541 master-0 kubenswrapper[31559]: I0216 02:39:24.860474 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "456d0b0c-176d-4d17-993b-d02048e33d25" (UID: "456d0b0c-176d-4d17-993b-d02048e33d25"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:24.903900 master-0 kubenswrapper[31559]: I0216 02:39:24.903567 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "456d0b0c-176d-4d17-993b-d02048e33d25" (UID: "456d0b0c-176d-4d17-993b-d02048e33d25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:24.912593 master-0 kubenswrapper[31559]: I0216 02:39:24.912536 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-config-data" (OuterVolumeSpecName: "config-data") pod "456d0b0c-176d-4d17-993b-d02048e33d25" (UID: "456d0b0c-176d-4d17-993b-d02048e33d25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:24.961296 master-0 kubenswrapper[31559]: I0216 02:39:24.961241 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.961296 master-0 kubenswrapper[31559]: I0216 02:39:24.961286 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.961296 master-0 kubenswrapper[31559]: I0216 02:39:24.961304 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8hm6\" (UniqueName: \"kubernetes.io/projected/456d0b0c-176d-4d17-993b-d02048e33d25-kube-api-access-p8hm6\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.961589 master-0 kubenswrapper[31559]: I0216 02:39:24.961318 31559 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.961589 master-0 kubenswrapper[31559]: I0216 02:39:24.961331 31559 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:24.961589 master-0 kubenswrapper[31559]: I0216 02:39:24.961342 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456d0b0c-176d-4d17-993b-d02048e33d25-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:25.155362 master-0 kubenswrapper[31559]: I0216 02:39:25.155283 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:39:25.265039 master-0 kubenswrapper[31559]: I0216 02:39:25.264992 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-sb\") pod \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " Feb 16 02:39:25.265473 master-0 kubenswrapper[31559]: I0216 02:39:25.265458 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkjp6\" (UniqueName: \"kubernetes.io/projected/1d38194f-c1be-4b20-acdd-793a9bef8b1b-kube-api-access-zkjp6\") pod \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " Feb 16 02:39:25.265641 master-0 kubenswrapper[31559]: I0216 02:39:25.265627 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-nb\") pod \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\" (UID: 
\"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " Feb 16 02:39:25.266132 master-0 kubenswrapper[31559]: I0216 02:39:25.265775 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-swift-storage-0\") pod \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " Feb 16 02:39:25.266275 master-0 kubenswrapper[31559]: I0216 02:39:25.266234 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-svc\") pod \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " Feb 16 02:39:25.266366 master-0 kubenswrapper[31559]: I0216 02:39:25.266352 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-config\") pod \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\" (UID: \"1d38194f-c1be-4b20-acdd-793a9bef8b1b\") " Feb 16 02:39:25.269094 master-0 kubenswrapper[31559]: I0216 02:39:25.269035 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d38194f-c1be-4b20-acdd-793a9bef8b1b-kube-api-access-zkjp6" (OuterVolumeSpecName: "kube-api-access-zkjp6") pod "1d38194f-c1be-4b20-acdd-793a9bef8b1b" (UID: "1d38194f-c1be-4b20-acdd-793a9bef8b1b"). InnerVolumeSpecName "kube-api-access-zkjp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:25.306815 master-0 kubenswrapper[31559]: I0216 02:39:25.306665 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:25.310773 master-0 kubenswrapper[31559]: I0216 02:39:25.310749 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jgr7s" Feb 16 02:39:25.310881 master-0 kubenswrapper[31559]: I0216 02:39:25.310848 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jgr7s" event={"ID":"f313a2ac-7115-41c1-ac49-ede1baebd452","Type":"ContainerDied","Data":"155cef6f9e6021de4e94dc97133dfd4e4abb237f5f102f0178d2da7dbc66f071"} Feb 16 02:39:25.310960 master-0 kubenswrapper[31559]: I0216 02:39:25.310943 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="155cef6f9e6021de4e94dc97133dfd4e4abb237f5f102f0178d2da7dbc66f071" Feb 16 02:39:25.311087 master-0 kubenswrapper[31559]: I0216 02:39:25.311053 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d38194f-c1be-4b20-acdd-793a9bef8b1b" (UID: "1d38194f-c1be-4b20-acdd-793a9bef8b1b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:25.312739 master-0 kubenswrapper[31559]: I0216 02:39:25.312693 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-ljf52"] Feb 16 02:39:25.313619 master-0 kubenswrapper[31559]: I0216 02:39:25.313590 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:39:25.313837 master-0 kubenswrapper[31559]: I0216 02:39:25.313766 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" Feb 16 02:39:25.313837 master-0 kubenswrapper[31559]: I0216 02:39:25.313770 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cdd8bf54c-48llw" event={"ID":"1d38194f-c1be-4b20-acdd-793a9bef8b1b","Type":"ContainerDied","Data":"5f9147cb857e7f9947ab647277a0f6215a87439fe95dc2c08c890f159a659360"} Feb 16 02:39:25.313837 master-0 kubenswrapper[31559]: I0216 02:39:25.313807 31559 scope.go:117] "RemoveContainer" containerID="ca0de1cbccc5e3d2a99d5a914ee181b235fc0306e6c10036ba9a1ff6e1fb316d" Feb 16 02:39:25.317682 master-0 kubenswrapper[31559]: I0216 02:39:25.317312 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zzhf8" Feb 16 02:39:25.317682 master-0 kubenswrapper[31559]: I0216 02:39:25.317603 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zzhf8" event={"ID":"456d0b0c-176d-4d17-993b-d02048e33d25","Type":"ContainerDied","Data":"c1ee8994906ec917e01f98211834f50928974933aaf4dac7e319c9df1a991af9"} Feb 16 02:39:25.317682 master-0 kubenswrapper[31559]: I0216 02:39:25.317634 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ee8994906ec917e01f98211834f50928974933aaf4dac7e319c9df1a991af9" Feb 16 02:39:25.334061 master-0 kubenswrapper[31559]: I0216 02:39:25.334008 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d38194f-c1be-4b20-acdd-793a9bef8b1b" (UID: "1d38194f-c1be-4b20-acdd-793a9bef8b1b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:25.334238 master-0 kubenswrapper[31559]: E0216 02:39:25.334193 31559 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf313a2ac_7115_41c1_ac49_ede1baebd452.slice\": RecentStats: unable to find data in memory cache]" Feb 16 02:39:25.347572 master-0 kubenswrapper[31559]: I0216 02:39:25.347534 31559 scope.go:117] "RemoveContainer" containerID="5f79381f3de30e0194afa5dc8505e7d1cb022bb7b2e311ca4ebfcb3e217a1cb4" Feb 16 02:39:25.366950 master-0 kubenswrapper[31559]: I0216 02:39:25.366891 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d38194f-c1be-4b20-acdd-793a9bef8b1b" (UID: "1d38194f-c1be-4b20-acdd-793a9bef8b1b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:25.369684 master-0 kubenswrapper[31559]: I0216 02:39:25.369632 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:25.369684 master-0 kubenswrapper[31559]: I0216 02:39:25.369674 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:25.369684 master-0 kubenswrapper[31559]: I0216 02:39:25.369687 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:25.370039 master-0 kubenswrapper[31559]: I0216 02:39:25.369698 31559 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkjp6\" (UniqueName: \"kubernetes.io/projected/1d38194f-c1be-4b20-acdd-793a9bef8b1b-kube-api-access-zkjp6\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:25.380697 master-0 kubenswrapper[31559]: I0216 02:39:25.374714 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d38194f-c1be-4b20-acdd-793a9bef8b1b" (UID: "1d38194f-c1be-4b20-acdd-793a9bef8b1b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:25.388644 master-0 kubenswrapper[31559]: I0216 02:39:25.388537 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-config" (OuterVolumeSpecName: "config") pod "1d38194f-c1be-4b20-acdd-793a9bef8b1b" (UID: "1d38194f-c1be-4b20-acdd-793a9bef8b1b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:25.472846 master-0 kubenswrapper[31559]: I0216 02:39:25.472766 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:25.473150 master-0 kubenswrapper[31559]: I0216 02:39:25.473093 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d38194f-c1be-4b20-acdd-793a9bef8b1b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:25.679532 master-0 kubenswrapper[31559]: I0216 02:39:25.674592 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cdd8bf54c-48llw"] Feb 16 02:39:25.684028 master-0 kubenswrapper[31559]: I0216 02:39:25.683973 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cdd8bf54c-48llw"] Feb 16 02:39:25.888023 master-0 kubenswrapper[31559]: I0216 02:39:25.887952 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6896ff5478-9txrd"] Feb 16 02:39:25.888450 master-0 kubenswrapper[31559]: E0216 02:39:25.888424 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerName="init" Feb 16 02:39:25.888497 master-0 kubenswrapper[31559]: I0216 02:39:25.888451 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerName="init" Feb 16 02:39:25.888497 master-0 kubenswrapper[31559]: E0216 02:39:25.888485 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456d0b0c-176d-4d17-993b-d02048e33d25" containerName="keystone-bootstrap" Feb 16 02:39:25.888497 master-0 kubenswrapper[31559]: I0216 02:39:25.888491 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="456d0b0c-176d-4d17-993b-d02048e33d25" containerName="keystone-bootstrap" Feb 16 02:39:25.888591 
master-0 kubenswrapper[31559]: E0216 02:39:25.888530 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerName="dnsmasq-dns" Feb 16 02:39:25.888591 master-0 kubenswrapper[31559]: I0216 02:39:25.888538 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerName="dnsmasq-dns" Feb 16 02:39:25.888591 master-0 kubenswrapper[31559]: E0216 02:39:25.888548 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f313a2ac-7115-41c1-ac49-ede1baebd452" containerName="placement-db-sync" Feb 16 02:39:25.888591 master-0 kubenswrapper[31559]: I0216 02:39:25.888554 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="f313a2ac-7115-41c1-ac49-ede1baebd452" containerName="placement-db-sync" Feb 16 02:39:25.888767 master-0 kubenswrapper[31559]: I0216 02:39:25.888748 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="f313a2ac-7115-41c1-ac49-ede1baebd452" containerName="placement-db-sync" Feb 16 02:39:25.888805 master-0 kubenswrapper[31559]: I0216 02:39:25.888768 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" containerName="dnsmasq-dns" Feb 16 02:39:25.888805 master-0 kubenswrapper[31559]: I0216 02:39:25.888798 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="456d0b0c-176d-4d17-993b-d02048e33d25" containerName="keystone-bootstrap" Feb 16 02:39:25.889999 master-0 kubenswrapper[31559]: I0216 02:39:25.889975 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:25.895824 master-0 kubenswrapper[31559]: I0216 02:39:25.895133 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 02:39:25.895824 master-0 kubenswrapper[31559]: I0216 02:39:25.895336 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 02:39:25.895824 master-0 kubenswrapper[31559]: I0216 02:39:25.895465 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 02:39:25.895824 master-0 kubenswrapper[31559]: I0216 02:39:25.895694 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 02:39:25.907167 master-0 kubenswrapper[31559]: I0216 02:39:25.906294 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6896ff5478-9txrd"] Feb 16 02:39:25.942380 master-0 kubenswrapper[31559]: I0216 02:39:25.942266 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d38194f-c1be-4b20-acdd-793a9bef8b1b" path="/var/lib/kubelet/pods/1d38194f-c1be-4b20-acdd-793a9bef8b1b/volumes" Feb 16 02:39:25.953294 master-0 kubenswrapper[31559]: I0216 02:39:25.953054 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6899ff6f9c-4hbb9"] Feb 16 02:39:25.959002 master-0 kubenswrapper[31559]: I0216 02:39:25.958730 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:25.961253 master-0 kubenswrapper[31559]: I0216 02:39:25.960969 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 02:39:25.961253 master-0 kubenswrapper[31559]: I0216 02:39:25.961212 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 02:39:25.962409 master-0 kubenswrapper[31559]: I0216 02:39:25.961890 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 02:39:25.963342 master-0 kubenswrapper[31559]: I0216 02:39:25.963299 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 02:39:25.963466 master-0 kubenswrapper[31559]: I0216 02:39:25.963331 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 02:39:25.968572 master-0 kubenswrapper[31559]: I0216 02:39:25.968326 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6899ff6f9c-4hbb9"] Feb 16 02:39:25.995468 master-0 kubenswrapper[31559]: I0216 02:39:25.990457 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-scripts\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:25.995468 master-0 kubenswrapper[31559]: I0216 02:39:25.990605 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-internal-tls-certs\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:25.995468 master-0 
kubenswrapper[31559]: I0216 02:39:25.990698 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-config-data\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:25.995468 master-0 kubenswrapper[31559]: I0216 02:39:25.990894 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-combined-ca-bundle\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:25.995468 master-0 kubenswrapper[31559]: I0216 02:39:25.992409 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-logs\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:25.995468 master-0 kubenswrapper[31559]: I0216 02:39:25.992543 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbjg\" (UniqueName: \"kubernetes.io/projected/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-kube-api-access-wlbjg\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:25.995468 master-0 kubenswrapper[31559]: I0216 02:39:25.992568 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-public-tls-certs\") pod \"placement-6896ff5478-9txrd\" (UID: 
\"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.094647 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-internal-tls-certs\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.094731 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-internal-tls-certs\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.094784 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5lsk\" (UniqueName: \"kubernetes.io/projected/5dc166e5-5887-4562-9e31-65239e383197-kube-api-access-m5lsk\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.094811 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-scripts\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.094849 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-config-data\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.094920 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-credential-keys\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.094945 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-fernet-keys\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.094981 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-public-tls-certs\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.095008 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-combined-ca-bundle\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.095024 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-logs\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.095051 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbjg\" (UniqueName: \"kubernetes.io/projected/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-kube-api-access-wlbjg\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.095068 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-public-tls-certs\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.095086 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-config-data\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.095108 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-scripts\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.095581 master-0 kubenswrapper[31559]: I0216 02:39:26.095126 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-combined-ca-bundle\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.096933 master-0 kubenswrapper[31559]: I0216 02:39:26.096887 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-logs\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.099561 master-0 kubenswrapper[31559]: I0216 02:39:26.099519 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-config-data\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.099650 master-0 kubenswrapper[31559]: I0216 02:39:26.099570 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-internal-tls-certs\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.099827 master-0 kubenswrapper[31559]: I0216 02:39:26.099799 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-scripts\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.100925 master-0 kubenswrapper[31559]: I0216 02:39:26.100633 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-public-tls-certs\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.103284 master-0 kubenswrapper[31559]: I0216 02:39:26.101894 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-combined-ca-bundle\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.113996 master-0 kubenswrapper[31559]: I0216 02:39:26.113934 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbjg\" (UniqueName: \"kubernetes.io/projected/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-kube-api-access-wlbjg\") pod \"placement-6896ff5478-9txrd\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.199315 master-0 kubenswrapper[31559]: I0216 02:39:26.199191 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-credential-keys\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.199603 master-0 kubenswrapper[31559]: I0216 02:39:26.199583 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-fernet-keys\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.199770 master-0 kubenswrapper[31559]: I0216 02:39:26.199749 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-public-tls-certs\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.199969 master-0 kubenswrapper[31559]: I0216 02:39:26.199950 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-config-data\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.200115 master-0 kubenswrapper[31559]: I0216 02:39:26.200098 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-combined-ca-bundle\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.200291 master-0 kubenswrapper[31559]: I0216 02:39:26.200274 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-internal-tls-certs\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.200469 master-0 kubenswrapper[31559]: I0216 02:39:26.200429 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5lsk\" (UniqueName: \"kubernetes.io/projected/5dc166e5-5887-4562-9e31-65239e383197-kube-api-access-m5lsk\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.200721 master-0 kubenswrapper[31559]: I0216 02:39:26.200670 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-scripts\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.207478 master-0 kubenswrapper[31559]: I0216 02:39:26.204954 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-credential-keys\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.207478 master-0 kubenswrapper[31559]: I0216 02:39:26.205049 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-fernet-keys\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.207478 master-0 kubenswrapper[31559]: I0216 02:39:26.206656 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-config-data\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.209667 master-0 kubenswrapper[31559]: I0216 02:39:26.209071 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-combined-ca-bundle\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.209667 master-0 kubenswrapper[31559]: I0216 02:39:26.209468 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-public-tls-certs\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.209926 master-0 kubenswrapper[31559]: I0216 02:39:26.209783 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-internal-tls-certs\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.211099 master-0 kubenswrapper[31559]: I0216 02:39:26.211063 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dc166e5-5887-4562-9e31-65239e383197-scripts\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.222884 master-0 kubenswrapper[31559]: I0216 02:39:26.222831 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:26.233606 master-0 kubenswrapper[31559]: I0216 02:39:26.233570 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5lsk\" (UniqueName: \"kubernetes.io/projected/5dc166e5-5887-4562-9e31-65239e383197-kube-api-access-m5lsk\") pod \"keystone-6899ff6f9c-4hbb9\" (UID: \"5dc166e5-5887-4562-9e31-65239e383197\") " pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.240316 master-0 kubenswrapper[31559]: I0216 02:39:26.240063 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55c5776498-fmz6g"] Feb 16 02:39:26.244213 master-0 kubenswrapper[31559]: I0216 02:39:26.244173 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.275094 master-0 kubenswrapper[31559]: I0216 02:39:26.275018 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:26.278865 master-0 kubenswrapper[31559]: I0216 02:39:26.278784 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55c5776498-fmz6g"] Feb 16 02:39:26.328454 master-0 kubenswrapper[31559]: I0216 02:39:26.328373 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ljf52" event={"ID":"656fd1dc-bb87-40c8-a161-31a194c23629","Type":"ContainerStarted","Data":"aade0af1b7738bfe24a56925bb49b35a9e66c034d6fed4ab1896c428d371799f"} Feb 16 02:39:26.331638 master-0 kubenswrapper[31559]: I0216 02:39:26.331607 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-db-sync-2z65z" event={"ID":"abbf5e91-b1f0-466e-80ca-d7b79ace4552","Type":"ContainerStarted","Data":"ac6c5bb5b269fc2b71f36c3f832ee3ef82d89b42267b7f6e48ea6b12501e72e9"} Feb 16 02:39:26.380253 master-0 kubenswrapper[31559]: I0216 02:39:26.380189 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-db-sync-2z65z" podStartSLOduration=3.528297061 podStartE2EDuration="21.380159314s" podCreationTimestamp="2026-02-16 02:39:05 +0000 UTC" firstStartedPulling="2026-02-16 02:39:06.827551946 +0000 UTC m=+999.172157961" lastFinishedPulling="2026-02-16 02:39:24.679414199 +0000 UTC m=+1017.024020214" observedRunningTime="2026-02-16 02:39:26.353772585 +0000 UTC m=+1018.698378600" watchObservedRunningTime="2026-02-16 02:39:26.380159314 +0000 UTC m=+1018.724765329" Feb 16 02:39:26.404688 master-0 kubenswrapper[31559]: I0216 02:39:26.403895 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-combined-ca-bundle\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.404688 master-0 kubenswrapper[31559]: I0216 02:39:26.403958 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-public-tls-certs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.404688 master-0 kubenswrapper[31559]: I0216 02:39:26.404015 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69a76199-7782-4949-9d32-13210cd195e8-logs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.404688 master-0 kubenswrapper[31559]: I0216 02:39:26.404171 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-config-data\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.404688 master-0 kubenswrapper[31559]: I0216 02:39:26.404217 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-scripts\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.404688 master-0 kubenswrapper[31559]: I0216 02:39:26.404281 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-internal-tls-certs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.404688 master-0 kubenswrapper[31559]: I0216 02:39:26.404335 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2n78\" (UniqueName: \"kubernetes.io/projected/69a76199-7782-4949-9d32-13210cd195e8-kube-api-access-p2n78\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.506108 master-0 kubenswrapper[31559]: I0216 02:39:26.506051 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-combined-ca-bundle\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.506108 master-0 kubenswrapper[31559]: I0216 02:39:26.506127 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-public-tls-certs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.506656 master-0 kubenswrapper[31559]: I0216 02:39:26.506163 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69a76199-7782-4949-9d32-13210cd195e8-logs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.507791 master-0 kubenswrapper[31559]: I0216 02:39:26.507646 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-config-data\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.507791 master-0 kubenswrapper[31559]: I0216 02:39:26.507704 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-scripts\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.507791 master-0 kubenswrapper[31559]: I0216 02:39:26.507760 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-internal-tls-certs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.507791 master-0 kubenswrapper[31559]: I0216 02:39:26.507787 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2n78\" (UniqueName: \"kubernetes.io/projected/69a76199-7782-4949-9d32-13210cd195e8-kube-api-access-p2n78\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.512669 master-0 kubenswrapper[31559]: I0216 02:39:26.511410 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-combined-ca-bundle\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.513049 master-0 kubenswrapper[31559]: I0216 02:39:26.513021 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-config-data\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.513360 master-0 kubenswrapper[31559]: I0216 02:39:26.513336 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69a76199-7782-4949-9d32-13210cd195e8-logs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.515629 master-0 kubenswrapper[31559]: I0216 02:39:26.515607 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-scripts\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.515890 master-0 kubenswrapper[31559]: I0216 02:39:26.515843 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-public-tls-certs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.516346 master-0 kubenswrapper[31559]: I0216 02:39:26.516323 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69a76199-7782-4949-9d32-13210cd195e8-internal-tls-certs\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.531166 master-0 kubenswrapper[31559]: I0216 02:39:26.523962 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2n78\" (UniqueName: 
\"kubernetes.io/projected/69a76199-7782-4949-9d32-13210cd195e8-kube-api-access-p2n78\") pod \"placement-55c5776498-fmz6g\" (UID: \"69a76199-7782-4949-9d32-13210cd195e8\") " pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.655156 master-0 kubenswrapper[31559]: I0216 02:39:26.655088 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:26.772789 master-0 kubenswrapper[31559]: I0216 02:39:26.772730 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6896ff5478-9txrd"] Feb 16 02:39:26.995865 master-0 kubenswrapper[31559]: I0216 02:39:26.995787 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6899ff6f9c-4hbb9"] Feb 16 02:39:27.266468 master-0 kubenswrapper[31559]: W0216 02:39:27.265815 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69a76199_7782_4949_9d32_13210cd195e8.slice/crio-1fbcee750a7f4465fab92843ba4a5a5da6bfd6c349f25276557dfa426b949577 WatchSource:0}: Error finding container 1fbcee750a7f4465fab92843ba4a5a5da6bfd6c349f25276557dfa426b949577: Status 404 returned error can't find the container with id 1fbcee750a7f4465fab92843ba4a5a5da6bfd6c349f25276557dfa426b949577 Feb 16 02:39:27.278410 master-0 kubenswrapper[31559]: I0216 02:39:27.278341 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55c5776498-fmz6g"] Feb 16 02:39:27.353619 master-0 kubenswrapper[31559]: I0216 02:39:27.353543 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55c5776498-fmz6g" event={"ID":"69a76199-7782-4949-9d32-13210cd195e8","Type":"ContainerStarted","Data":"1fbcee750a7f4465fab92843ba4a5a5da6bfd6c349f25276557dfa426b949577"} Feb 16 02:39:27.360206 master-0 kubenswrapper[31559]: I0216 02:39:27.360149 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6896ff5478-9txrd" 
event={"ID":"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76","Type":"ContainerStarted","Data":"5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435"} Feb 16 02:39:27.360552 master-0 kubenswrapper[31559]: I0216 02:39:27.360492 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6896ff5478-9txrd" event={"ID":"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76","Type":"ContainerStarted","Data":"42d09209fa1f383a9ba2c1c6066fea1070816588d9dcd70238ed55aa8a08d22d"} Feb 16 02:39:27.362627 master-0 kubenswrapper[31559]: I0216 02:39:27.362589 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6899ff6f9c-4hbb9" event={"ID":"5dc166e5-5887-4562-9e31-65239e383197","Type":"ContainerStarted","Data":"f8add4abc9b93bd88cd33f4d244fe133b905357d5fce5d045582052f802dc1db"} Feb 16 02:39:27.362867 master-0 kubenswrapper[31559]: I0216 02:39:27.362836 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6899ff6f9c-4hbb9" event={"ID":"5dc166e5-5887-4562-9e31-65239e383197","Type":"ContainerStarted","Data":"13ad1e9dba4ddbb1acf735f2531d260fe84a64a8b960f0f1106acedb2373b8d4"} Feb 16 02:39:28.377012 master-0 kubenswrapper[31559]: I0216 02:39:28.376934 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55c5776498-fmz6g" event={"ID":"69a76199-7782-4949-9d32-13210cd195e8","Type":"ContainerStarted","Data":"e9cd0b873b8722181500af70ab77750bcb9138947b82e68cfdc576696f4453a5"} Feb 16 02:39:28.377012 master-0 kubenswrapper[31559]: I0216 02:39:28.377005 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55c5776498-fmz6g" event={"ID":"69a76199-7782-4949-9d32-13210cd195e8","Type":"ContainerStarted","Data":"92a4403da38eb5b6e64c404433a803452585657b2492be3e5f1d459441a2cd4e"} Feb 16 02:39:28.377755 master-0 kubenswrapper[31559]: I0216 02:39:28.377133 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:28.383914 
master-0 kubenswrapper[31559]: I0216 02:39:28.382903 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6896ff5478-9txrd" event={"ID":"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76","Type":"ContainerStarted","Data":"8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1"} Feb 16 02:39:28.383914 master-0 kubenswrapper[31559]: I0216 02:39:28.383263 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:28.383914 master-0 kubenswrapper[31559]: I0216 02:39:28.383386 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:28.383914 master-0 kubenswrapper[31559]: I0216 02:39:28.383480 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:28.423296 master-0 kubenswrapper[31559]: I0216 02:39:28.419660 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55c5776498-fmz6g" podStartSLOduration=2.419639006 podStartE2EDuration="2.419639006s" podCreationTimestamp="2026-02-16 02:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:28.406010011 +0000 UTC m=+1020.750616026" watchObservedRunningTime="2026-02-16 02:39:28.419639006 +0000 UTC m=+1020.764245021" Feb 16 02:39:28.442955 master-0 kubenswrapper[31559]: I0216 02:39:28.442847 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6899ff6f9c-4hbb9" podStartSLOduration=3.442821684 podStartE2EDuration="3.442821684s" podCreationTimestamp="2026-02-16 02:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:28.438985837 +0000 UTC m=+1020.783591872" watchObservedRunningTime="2026-02-16 
02:39:28.442821684 +0000 UTC m=+1020.787427719" Feb 16 02:39:28.472178 master-0 kubenswrapper[31559]: I0216 02:39:28.472090 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6896ff5478-9txrd" podStartSLOduration=3.472070315 podStartE2EDuration="3.472070315s" podCreationTimestamp="2026-02-16 02:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:28.471366178 +0000 UTC m=+1020.815972193" watchObservedRunningTime="2026-02-16 02:39:28.472070315 +0000 UTC m=+1020.816676330" Feb 16 02:39:29.394478 master-0 kubenswrapper[31559]: I0216 02:39:29.394327 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:34.475594 master-0 kubenswrapper[31559]: I0216 02:39:34.475533 31559 generic.go:334] "Generic (PLEG): container finished" podID="656fd1dc-bb87-40c8-a161-31a194c23629" containerID="fa760d473a2ab26e3653d949b4f25234338401867f3b4574fc9895f762e78417" exitCode=0 Feb 16 02:39:34.475594 master-0 kubenswrapper[31559]: I0216 02:39:34.475595 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ljf52" event={"ID":"656fd1dc-bb87-40c8-a161-31a194c23629","Type":"ContainerDied","Data":"fa760d473a2ab26e3653d949b4f25234338401867f3b4574fc9895f762e78417"} Feb 16 02:39:34.794015 master-0 kubenswrapper[31559]: E0216 02:39:34.793937 31559 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 16 02:39:34.794015 master-0 kubenswrapper[31559]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/656fd1dc-bb87-40c8-a161-31a194c23629/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Feb 16 02:39:34.794015 master-0 kubenswrapper[31559]: > podSandboxID="aade0af1b7738bfe24a56925bb49b35a9e66c034d6fed4ab1896c428d371799f" Feb 
16 02:39:34.794385 master-0 kubenswrapper[31559]: E0216 02:39:34.794122 31559 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 16 02:39:34.794385 master-0 kubenswrapper[31559]: container &Container{Name:ironic-db-sync,Image:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf,Command:[/bin/bash],Args:[-c /usr/local/bin/container-scripts/dbsync.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-podinfo,ReadOnly:false,MountPath:/etc/podinfo,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45gnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:n
il,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-db-sync-ljf52_openstack(656fd1dc-bb87-40c8-a161-31a194c23629): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/656fd1dc-bb87-40c8-a161-31a194c23629/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Feb 16 02:39:34.794385 master-0 kubenswrapper[31559]: > logger="UnhandledError" Feb 16 02:39:34.795513 master-0 kubenswrapper[31559]: E0216 02:39:34.795419 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-db-sync\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/656fd1dc-bb87-40c8-a161-31a194c23629/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory\\n\"" pod="openstack/ironic-db-sync-ljf52" podUID="656fd1dc-bb87-40c8-a161-31a194c23629" Feb 16 02:39:35.493548 master-0 kubenswrapper[31559]: I0216 02:39:35.493405 31559 generic.go:334] "Generic (PLEG): container finished" podID="abbf5e91-b1f0-466e-80ca-d7b79ace4552" containerID="ac6c5bb5b269fc2b71f36c3f832ee3ef82d89b42267b7f6e48ea6b12501e72e9" exitCode=0 Feb 16 02:39:35.494217 master-0 kubenswrapper[31559]: I0216 02:39:35.493504 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-db-sync-2z65z" event={"ID":"abbf5e91-b1f0-466e-80ca-d7b79ace4552","Type":"ContainerDied","Data":"ac6c5bb5b269fc2b71f36c3f832ee3ef82d89b42267b7f6e48ea6b12501e72e9"} Feb 16 02:39:36.512316 master-0 kubenswrapper[31559]: I0216 02:39:36.512224 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ironic-db-sync-ljf52" event={"ID":"656fd1dc-bb87-40c8-a161-31a194c23629","Type":"ContainerStarted","Data":"cf3ac35f5b26be72e1fc0465c5d5599ca1dddd3c6bf02f04f4ce1162cce6acd6"} Feb 16 02:39:36.515983 master-0 kubenswrapper[31559]: I0216 02:39:36.515901 31559 generic.go:334] "Generic (PLEG): container finished" podID="032a3554-910a-472a-8537-73e08670ffe8" containerID="fa7b96e260598981474c211b1d4e4409f38a196a76e9ee89e6026c1b77ba5afa" exitCode=0 Feb 16 02:39:36.516241 master-0 kubenswrapper[31559]: I0216 02:39:36.516168 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mxb5k" event={"ID":"032a3554-910a-472a-8537-73e08670ffe8","Type":"ContainerDied","Data":"fa7b96e260598981474c211b1d4e4409f38a196a76e9ee89e6026c1b77ba5afa"} Feb 16 02:39:36.552611 master-0 kubenswrapper[31559]: I0216 02:39:36.552486 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-ljf52" podStartSLOduration=13.226419138 podStartE2EDuration="21.55246428s" podCreationTimestamp="2026-02-16 02:39:15 +0000 UTC" firstStartedPulling="2026-02-16 02:39:25.347659919 +0000 UTC m=+1017.692265934" lastFinishedPulling="2026-02-16 02:39:33.673705051 +0000 UTC m=+1026.018311076" observedRunningTime="2026-02-16 02:39:36.549008512 +0000 UTC m=+1028.893614557" watchObservedRunningTime="2026-02-16 02:39:36.55246428 +0000 UTC m=+1028.897070325" Feb 16 02:39:37.017493 master-0 kubenswrapper[31559]: I0216 02:39:37.017457 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:37.110002 master-0 kubenswrapper[31559]: I0216 02:39:37.109935 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-combined-ca-bundle\") pod \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " Feb 16 02:39:37.110940 master-0 kubenswrapper[31559]: I0216 02:39:37.110906 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-config-data\") pod \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " Feb 16 02:39:37.111285 master-0 kubenswrapper[31559]: I0216 02:39:37.111252 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rk4k\" (UniqueName: \"kubernetes.io/projected/abbf5e91-b1f0-466e-80ca-d7b79ace4552-kube-api-access-8rk4k\") pod \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " Feb 16 02:39:37.111423 master-0 kubenswrapper[31559]: I0216 02:39:37.111396 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-scripts\") pod \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " Feb 16 02:39:37.111610 master-0 kubenswrapper[31559]: I0216 02:39:37.111579 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbf5e91-b1f0-466e-80ca-d7b79ace4552-etc-machine-id\") pod \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " Feb 16 02:39:37.111663 master-0 kubenswrapper[31559]: I0216 02:39:37.111636 31559 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-db-sync-config-data\") pod \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\" (UID: \"abbf5e91-b1f0-466e-80ca-d7b79ace4552\") " Feb 16 02:39:37.112399 master-0 kubenswrapper[31559]: I0216 02:39:37.112350 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abbf5e91-b1f0-466e-80ca-d7b79ace4552-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "abbf5e91-b1f0-466e-80ca-d7b79ace4552" (UID: "abbf5e91-b1f0-466e-80ca-d7b79ace4552"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:37.114684 master-0 kubenswrapper[31559]: I0216 02:39:37.114654 31559 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbf5e91-b1f0-466e-80ca-d7b79ace4552-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:37.117284 master-0 kubenswrapper[31559]: I0216 02:39:37.115992 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbf5e91-b1f0-466e-80ca-d7b79ace4552-kube-api-access-8rk4k" (OuterVolumeSpecName: "kube-api-access-8rk4k") pod "abbf5e91-b1f0-466e-80ca-d7b79ace4552" (UID: "abbf5e91-b1f0-466e-80ca-d7b79ace4552"). InnerVolumeSpecName "kube-api-access-8rk4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:37.117417 master-0 kubenswrapper[31559]: I0216 02:39:37.117363 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-scripts" (OuterVolumeSpecName: "scripts") pod "abbf5e91-b1f0-466e-80ca-d7b79ace4552" (UID: "abbf5e91-b1f0-466e-80ca-d7b79ace4552"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:37.126368 master-0 kubenswrapper[31559]: I0216 02:39:37.126289 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "abbf5e91-b1f0-466e-80ca-d7b79ace4552" (UID: "abbf5e91-b1f0-466e-80ca-d7b79ace4552"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:37.141575 master-0 kubenswrapper[31559]: I0216 02:39:37.141520 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abbf5e91-b1f0-466e-80ca-d7b79ace4552" (UID: "abbf5e91-b1f0-466e-80ca-d7b79ace4552"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:37.204700 master-0 kubenswrapper[31559]: I0216 02:39:37.204656 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-config-data" (OuterVolumeSpecName: "config-data") pod "abbf5e91-b1f0-466e-80ca-d7b79ace4552" (UID: "abbf5e91-b1f0-466e-80ca-d7b79ace4552"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:37.217500 master-0 kubenswrapper[31559]: I0216 02:39:37.217383 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:37.217500 master-0 kubenswrapper[31559]: I0216 02:39:37.217446 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rk4k\" (UniqueName: \"kubernetes.io/projected/abbf5e91-b1f0-466e-80ca-d7b79ace4552-kube-api-access-8rk4k\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:37.217500 master-0 kubenswrapper[31559]: I0216 02:39:37.217466 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:37.217500 master-0 kubenswrapper[31559]: I0216 02:39:37.217477 31559 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:37.217500 master-0 kubenswrapper[31559]: I0216 02:39:37.217490 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbf5e91-b1f0-466e-80ca-d7b79ace4552-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:37.537114 master-0 kubenswrapper[31559]: I0216 02:39:37.536944 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-db-sync-2z65z" event={"ID":"abbf5e91-b1f0-466e-80ca-d7b79ace4552","Type":"ContainerDied","Data":"8b42be8b811288b40c93e27668cb07bbb709d6bd8ade3a50e3b423a88163b599"} Feb 16 02:39:37.537114 master-0 kubenswrapper[31559]: I0216 02:39:37.537018 31559 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8b42be8b811288b40c93e27668cb07bbb709d6bd8ade3a50e3b423a88163b599" Feb 16 02:39:37.537114 master-0 kubenswrapper[31559]: I0216 02:39:37.536955 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-db-sync-2z65z" Feb 16 02:39:37.833826 master-0 kubenswrapper[31559]: I0216 02:39:37.831702 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dde57-scheduler-0"] Feb 16 02:39:37.833826 master-0 kubenswrapper[31559]: E0216 02:39:37.832187 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbf5e91-b1f0-466e-80ca-d7b79ace4552" containerName="cinder-dde57-db-sync" Feb 16 02:39:37.833826 master-0 kubenswrapper[31559]: I0216 02:39:37.832204 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbf5e91-b1f0-466e-80ca-d7b79ace4552" containerName="cinder-dde57-db-sync" Feb 16 02:39:37.833826 master-0 kubenswrapper[31559]: I0216 02:39:37.832701 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbf5e91-b1f0-466e-80ca-d7b79ace4552" containerName="cinder-dde57-db-sync" Feb 16 02:39:37.833826 master-0 kubenswrapper[31559]: I0216 02:39:37.833775 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:37.836414 master-0 kubenswrapper[31559]: I0216 02:39:37.835844 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-scripts" Feb 16 02:39:37.836414 master-0 kubenswrapper[31559]: I0216 02:39:37.836307 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-scheduler-config-data" Feb 16 02:39:37.836580 master-0 kubenswrapper[31559]: I0216 02:39:37.836551 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-config-data" Feb 16 02:39:37.848424 master-0 kubenswrapper[31559]: I0216 02:39:37.846952 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-scheduler-0"] Feb 16 02:39:37.888507 master-0 kubenswrapper[31559]: I0216 02:39:37.881790 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:37.952509 master-0 kubenswrapper[31559]: I0216 02:39:37.948744 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrjb\" (UniqueName: \"kubernetes.io/projected/630fbd6e-863a-4347-acc7-38ae08b97e61-kube-api-access-snrjb\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:37.952509 master-0 kubenswrapper[31559]: I0216 02:39:37.948893 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:37.952509 master-0 kubenswrapper[31559]: I0216 02:39:37.949014 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/630fbd6e-863a-4347-acc7-38ae08b97e61-etc-machine-id\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:37.952509 master-0 kubenswrapper[31559]: I0216 02:39:37.949072 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-scripts\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:37.952509 master-0 kubenswrapper[31559]: I0216 02:39:37.949132 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data-custom\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:37.952509 master-0 kubenswrapper[31559]: I0216 02:39:37.949154 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-combined-ca-bundle\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:38.010470 master-0 kubenswrapper[31559]: I0216 02:39:38.007780 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"] Feb 16 02:39:38.010470 master-0 kubenswrapper[31559]: E0216 02:39:38.008254 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032a3554-910a-472a-8537-73e08670ffe8" containerName="neutron-db-sync" Feb 16 02:39:38.010470 master-0 kubenswrapper[31559]: I0216 02:39:38.008266 31559 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="032a3554-910a-472a-8537-73e08670ffe8" containerName="neutron-db-sync"
Feb 16 02:39:38.010470 master-0 kubenswrapper[31559]: I0216 02:39:38.008507 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="032a3554-910a-472a-8537-73e08670ffe8" containerName="neutron-db-sync"
Feb 16 02:39:38.010470 master-0 kubenswrapper[31559]: I0216 02:39:38.009517 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.067460 master-0 kubenswrapper[31559]: I0216 02:39:38.064573 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"]
Feb 16 02:39:38.067460 master-0 kubenswrapper[31559]: I0216 02:39:38.065174 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-volume-lvm-iscsi-config-data"
Feb 16 02:39:38.126974 master-0 kubenswrapper[31559]: I0216 02:39:38.094560 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vvnn\" (UniqueName: \"kubernetes.io/projected/032a3554-910a-472a-8537-73e08670ffe8-kube-api-access-7vvnn\") pod \"032a3554-910a-472a-8537-73e08670ffe8\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") "
Feb 16 02:39:38.126974 master-0 kubenswrapper[31559]: I0216 02:39:38.094950 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-combined-ca-bundle\") pod \"032a3554-910a-472a-8537-73e08670ffe8\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") "
Feb 16 02:39:38.126974 master-0 kubenswrapper[31559]: I0216 02:39:38.095030 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-config\") pod \"032a3554-910a-472a-8537-73e08670ffe8\" (UID: \"032a3554-910a-472a-8537-73e08670ffe8\") "
Feb 16 02:39:38.126974 master-0 kubenswrapper[31559]: I0216 02:39:38.097832 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78fccd7959-kjwk5"]
Feb 16 02:39:38.126974 master-0 kubenswrapper[31559]: I0216 02:39:38.122524 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.129862 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-lib-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.129979 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cnn5\" (UniqueName: \"kubernetes.io/projected/e15f1a9c-41e2-4f45-9a40-94557f09f863-kube-api-access-8cnn5\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.130071 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-brick\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.130244 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-sys\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.130350 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.130503 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/630fbd6e-863a-4347-acc7-38ae08b97e61-etc-machine-id\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.130593 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-dev\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.130688 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-lib-modules\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.131184 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/630fbd6e-863a-4347-acc7-38ae08b97e61-etc-machine-id\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.134980 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-scripts\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.135463 master-0 kubenswrapper[31559]: I0216 02:39:38.135176 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-iscsi\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.153458 master-0 kubenswrapper[31559]: I0216 02:39:38.151484 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032a3554-910a-472a-8537-73e08670ffe8-kube-api-access-7vvnn" (OuterVolumeSpecName: "kube-api-access-7vvnn") pod "032a3554-910a-472a-8537-73e08670ffe8" (UID: "032a3554-910a-472a-8537-73e08670ffe8"). InnerVolumeSpecName "kube-api-access-7vvnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:39:38.191897 master-0 kubenswrapper[31559]: I0216 02:39:38.189706 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-scripts\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.203451 master-0 kubenswrapper[31559]: I0216 02:39:38.199048 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data-custom\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.242477 master-0 kubenswrapper[31559]: I0216 02:39:38.241157 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data-custom\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.242477 master-0 kubenswrapper[31559]: I0216 02:39:38.241283 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-config" (OuterVolumeSpecName: "config") pod "032a3554-910a-472a-8537-73e08670ffe8" (UID: "032a3554-910a-472a-8537-73e08670ffe8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.253800 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-combined-ca-bundle\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.253868 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-machine-id\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.253982 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254031 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data-custom\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254106 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-nvme\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254152 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-combined-ca-bundle\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254173 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snrjb\" (UniqueName: \"kubernetes.io/projected/630fbd6e-863a-4347-acc7-38ae08b97e61-kube-api-access-snrjb\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254267 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254284 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-run\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254330 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-scripts\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254457 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vvnn\" (UniqueName: \"kubernetes.io/projected/032a3554-910a-472a-8537-73e08670ffe8-kube-api-access-7vvnn\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:38.254472 master-0 kubenswrapper[31559]: I0216 02:39:38.254473 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:38.265454 master-0 kubenswrapper[31559]: I0216 02:39:38.261295 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-combined-ca-bundle\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.265454 master-0 kubenswrapper[31559]: I0216 02:39:38.261362 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78fccd7959-kjwk5"]
Feb 16 02:39:38.270464 master-0 kubenswrapper[31559]: I0216 02:39:38.269525 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "032a3554-910a-472a-8537-73e08670ffe8" (UID: "032a3554-910a-472a-8537-73e08670ffe8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:38.270464 master-0 kubenswrapper[31559]: I0216 02:39:38.270087 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.286457 master-0 kubenswrapper[31559]: I0216 02:39:38.285025 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snrjb\" (UniqueName: \"kubernetes.io/projected/630fbd6e-863a-4347-acc7-38ae08b97e61-kube-api-access-snrjb\") pod \"cinder-dde57-scheduler-0\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:38.302520 master-0 kubenswrapper[31559]: I0216 02:39:38.299704 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dde57-backup-0"]
Feb 16 02:39:38.302520 master-0 kubenswrapper[31559]: I0216 02:39:38.301350 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.306472 master-0 kubenswrapper[31559]: I0216 02:39:38.305761 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-backup-config-data"
Feb 16 02:39:38.307978 master-0 kubenswrapper[31559]: I0216 02:39:38.306690 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-backup-0"]
Feb 16 02:39:38.328029 master-0 kubenswrapper[31559]: I0216 02:39:38.327985 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dde57-api-0"]
Feb 16 02:39:38.329842 master-0 kubenswrapper[31559]: I0216 02:39:38.329818 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:38.335103 master-0 kubenswrapper[31559]: I0216 02:39:38.334879 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-api-config-data"
Feb 16 02:39:38.343518 master-0 kubenswrapper[31559]: I0216 02:39:38.343096 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-api-0"]
Feb 16 02:39:38.361000 master-0 kubenswrapper[31559]: I0216 02:39:38.360859 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-dev\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.361000 master-0 kubenswrapper[31559]: I0216 02:39:38.361010 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-lib-modules\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.361264 master-0 kubenswrapper[31559]: I0216 02:39:38.360949 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-dev\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.362663 master-0 kubenswrapper[31559]: I0216 02:39:38.362615 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-lib-modules\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.362785 master-0 kubenswrapper[31559]: I0216 02:39:38.362759 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhpx5\" (UniqueName: \"kubernetes.io/projected/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-kube-api-access-hhpx5\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.362851 master-0 kubenswrapper[31559]: I0216 02:39:38.362835 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-iscsi\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.362961 master-0 kubenswrapper[31559]: I0216 02:39:38.362900 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-machine-id\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.363002 master-0 kubenswrapper[31559]: I0216 02:39:38.362952 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-iscsi\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.363002 master-0 kubenswrapper[31559]: I0216 02:39:38.362995 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-machine-id\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.363075 master-0 kubenswrapper[31559]: I0216 02:39:38.363059 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-config\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.363125 master-0 kubenswrapper[31559]: I0216 02:39:38.363110 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-svc\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.363214 master-0 kubenswrapper[31559]: I0216 02:39:38.363194 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.363778 master-0 kubenswrapper[31559]: I0216 02:39:38.363398 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.363778 master-0 kubenswrapper[31559]: I0216 02:39:38.363548 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data-custom\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.364307 master-0 kubenswrapper[31559]: I0216 02:39:38.364279 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-nvme\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.364349 master-0 kubenswrapper[31559]: I0216 02:39:38.364306 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-nb\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.364409 master-0 kubenswrapper[31559]: I0216 02:39:38.364387 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-combined-ca-bundle\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.364527 master-0 kubenswrapper[31559]: I0216 02:39:38.364505 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-sb\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.364566 master-0 kubenswrapper[31559]: I0216 02:39:38.364521 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-nvme\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.364607 master-0 kubenswrapper[31559]: I0216 02:39:38.364591 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-run\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.364715 master-0 kubenswrapper[31559]: I0216 02:39:38.364692 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-scripts\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.364786 master-0 kubenswrapper[31559]: I0216 02:39:38.364723 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-run\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.364786 master-0 kubenswrapper[31559]: I0216 02:39:38.364778 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-lib-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.365146 master-0 kubenswrapper[31559]: I0216 02:39:38.365072 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-lib-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.365249 master-0 kubenswrapper[31559]: I0216 02:39:38.365170 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cnn5\" (UniqueName: \"kubernetes.io/projected/e15f1a9c-41e2-4f45-9a40-94557f09f863-kube-api-access-8cnn5\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.365555 master-0 kubenswrapper[31559]: I0216 02:39:38.365230 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-brick\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.365678 master-0 kubenswrapper[31559]: I0216 02:39:38.365661 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-brick\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.365738 master-0 kubenswrapper[31559]: I0216 02:39:38.365713 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-sys\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.365774 master-0 kubenswrapper[31559]: I0216 02:39:38.365746 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-swift-storage-0\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.365774 master-0 kubenswrapper[31559]: I0216 02:39:38.365769 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.365933 master-0 kubenswrapper[31559]: I0216 02:39:38.365908 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032a3554-910a-472a-8537-73e08670ffe8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:38.366413 master-0 kubenswrapper[31559]: I0216 02:39:38.366390 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-sys\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.367966 master-0 kubenswrapper[31559]: I0216 02:39:38.367675 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-scripts\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.367966 master-0 kubenswrapper[31559]: I0216 02:39:38.367728 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-combined-ca-bundle\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.370866 master-0 kubenswrapper[31559]: I0216 02:39:38.369878 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data-custom\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.372999 master-0 kubenswrapper[31559]: I0216 02:39:38.372959 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.383443 master-0 kubenswrapper[31559]: I0216 02:39:38.383359 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cnn5\" (UniqueName: \"kubernetes.io/projected/e15f1a9c-41e2-4f45-9a40-94557f09f863-kube-api-access-8cnn5\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.441230 master-0 kubenswrapper[31559]: I0216 02:39:38.438579 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468502 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhpx5\" (UniqueName: \"kubernetes.io/projected/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-kube-api-access-hhpx5\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468563 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-sys\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468582 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14c0d02d-65db-41be-81a2-f4a1c5996cb8-logs\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468602 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468625 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468657 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-scripts\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468691 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-config\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468712 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data-custom\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468729 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqkmm\" (UniqueName: \"kubernetes.io/projected/14c0d02d-65db-41be-81a2-f4a1c5996cb8-kube-api-access-xqkmm\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468751 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-svc\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468772 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-combined-ca-bundle\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468790 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-iscsi\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468820 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-run\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468839 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvdb\" (UniqueName: \"kubernetes.io/projected/69ebefce-f456-4d6e-9111-5df13870bbae-kube-api-access-rvvdb\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468859 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-lib-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468890 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-scripts\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468911 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-combined-ca-bundle\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468937 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-nb\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.468993 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14c0d02d-65db-41be-81a2-f4a1c5996cb8-etc-machine-id\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469017 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-nvme\") pod
\"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469051 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-brick\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469072 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-lib-modules\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469096 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-sb\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469137 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-machine-id\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469171 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469541 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-dev\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469568 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-swift-storage-0\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.471000 master-0 kubenswrapper[31559]: I0216 02:39:38.469614 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data-custom\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.472347 master-0 kubenswrapper[31559]: I0216 02:39:38.471194 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-config\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.472347 master-0 kubenswrapper[31559]: I0216 02:39:38.472035 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-svc\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.472938 master-0 kubenswrapper[31559]: I0216 02:39:38.472689 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-nb\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.472938 master-0 kubenswrapper[31559]: I0216 02:39:38.472746 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:38.472938 master-0 kubenswrapper[31559]: I0216 02:39:38.472879 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-sb\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.473379 master-0 kubenswrapper[31559]: I0216 02:39:38.472970 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-swift-storage-0\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.504287 master-0 kubenswrapper[31559]: I0216 02:39:38.504232 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhpx5\" (UniqueName: \"kubernetes.io/projected/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-kube-api-access-hhpx5\") pod \"dnsmasq-dns-78fccd7959-kjwk5\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") " 
pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.572876 master-0 kubenswrapper[31559]: I0216 02:39:38.572833 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-brick\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.572876 master-0 kubenswrapper[31559]: I0216 02:39:38.572880 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-lib-modules\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.572932 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-machine-id\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.572968 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.572991 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-dev\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 
kubenswrapper[31559]: I0216 02:39:38.573014 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data-custom\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573045 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-sys\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573062 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14c0d02d-65db-41be-81a2-f4a1c5996cb8-logs\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573081 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573124 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573192 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-scripts\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573236 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data-custom\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573257 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqkmm\" (UniqueName: \"kubernetes.io/projected/14c0d02d-65db-41be-81a2-f4a1c5996cb8-kube-api-access-xqkmm\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573279 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-combined-ca-bundle\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573301 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-iscsi\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573317 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-run\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573337 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvdb\" (UniqueName: \"kubernetes.io/projected/69ebefce-f456-4d6e-9111-5df13870bbae-kube-api-access-rvvdb\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573353 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-lib-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573386 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-scripts\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573405 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-combined-ca-bundle\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573446 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/14c0d02d-65db-41be-81a2-f4a1c5996cb8-etc-machine-id\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573470 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-nvme\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.573684 master-0 kubenswrapper[31559]: I0216 02:39:38.573561 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-nvme\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.574921 master-0 kubenswrapper[31559]: I0216 02:39:38.574032 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-brick\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.574921 master-0 kubenswrapper[31559]: I0216 02:39:38.574065 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-lib-modules\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.574921 master-0 kubenswrapper[31559]: I0216 02:39:38.574086 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-machine-id\") pod \"cinder-dde57-backup-0\" (UID: 
\"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.574921 master-0 kubenswrapper[31559]: I0216 02:39:38.574116 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.574921 master-0 kubenswrapper[31559]: I0216 02:39:38.574136 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-dev\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.575819 master-0 kubenswrapper[31559]: I0216 02:39:38.575319 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-run\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.575819 master-0 kubenswrapper[31559]: I0216 02:39:38.575373 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-iscsi\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.575819 master-0 kubenswrapper[31559]: I0216 02:39:38.575427 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-lib-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.575819 master-0 kubenswrapper[31559]: I0216 
02:39:38.575473 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-sys\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.579700 master-0 kubenswrapper[31559]: I0216 02:39:38.579411 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data-custom\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.581364 master-0 kubenswrapper[31559]: I0216 02:39:38.581323 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.582917 master-0 kubenswrapper[31559]: I0216 02:39:38.581725 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-scripts\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.583147 master-0 kubenswrapper[31559]: I0216 02:39:38.583113 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mxb5k" event={"ID":"032a3554-910a-472a-8537-73e08670ffe8","Type":"ContainerDied","Data":"abc71f14f21319c5483859a958d4b27119ff55a5ba6483f1ed45f9abe1aefee9"} Feb 16 02:39:38.583216 master-0 kubenswrapper[31559]: I0216 02:39:38.583152 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abc71f14f21319c5483859a958d4b27119ff55a5ba6483f1ed45f9abe1aefee9" Feb 16 02:39:38.583262 master-0 
kubenswrapper[31559]: I0216 02:39:38.583227 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mxb5k" Feb 16 02:39:38.583777 master-0 kubenswrapper[31559]: I0216 02:39:38.583686 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14c0d02d-65db-41be-81a2-f4a1c5996cb8-etc-machine-id\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.585120 master-0 kubenswrapper[31559]: I0216 02:39:38.584999 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-combined-ca-bundle\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.585392 master-0 kubenswrapper[31559]: I0216 02:39:38.585352 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-combined-ca-bundle\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.585890 master-0 kubenswrapper[31559]: I0216 02:39:38.585840 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-scripts\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.585970 master-0 kubenswrapper[31559]: I0216 02:39:38.585621 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14c0d02d-65db-41be-81a2-f4a1c5996cb8-logs\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " 
pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.587426 master-0 kubenswrapper[31559]: I0216 02:39:38.587394 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.591839 master-0 kubenswrapper[31559]: I0216 02:39:38.591803 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data-custom\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.594975 master-0 kubenswrapper[31559]: I0216 02:39:38.594933 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqkmm\" (UniqueName: \"kubernetes.io/projected/14c0d02d-65db-41be-81a2-f4a1c5996cb8-kube-api-access-xqkmm\") pod \"cinder-dde57-api-0\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.595312 master-0 kubenswrapper[31559]: I0216 02:39:38.595265 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvdb\" (UniqueName: \"kubernetes.io/projected/69ebefce-f456-4d6e-9111-5df13870bbae-kube-api-access-rvvdb\") pod \"cinder-dde57-backup-0\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.632911 master-0 kubenswrapper[31559]: I0216 02:39:38.632844 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" Feb 16 02:39:38.642706 master-0 kubenswrapper[31559]: I0216 02:39:38.642651 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:38.656243 master-0 kubenswrapper[31559]: I0216 02:39:38.656148 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-api-0" Feb 16 02:39:38.862709 master-0 kubenswrapper[31559]: I0216 02:39:38.856292 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fccd7959-kjwk5"] Feb 16 02:39:38.935545 master-0 kubenswrapper[31559]: I0216 02:39:38.934215 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f4994bbb5-4cdht"] Feb 16 02:39:38.976980 master-0 kubenswrapper[31559]: I0216 02:39:38.975721 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4994bbb5-4cdht"] Feb 16 02:39:38.976980 master-0 kubenswrapper[31559]: I0216 02:39:38.975855 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" Feb 16 02:39:39.062714 master-0 kubenswrapper[31559]: I0216 02:39:39.062403 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5687464c96-4rx8g"] Feb 16 02:39:39.065265 master-0 kubenswrapper[31559]: I0216 02:39:39.064121 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.067587 master-0 kubenswrapper[31559]: I0216 02:39:39.066447 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 16 02:39:39.067587 master-0 kubenswrapper[31559]: I0216 02:39:39.066742 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 16 02:39:39.067587 master-0 kubenswrapper[31559]: I0216 02:39:39.066978 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 16 02:39:39.071889 master-0 kubenswrapper[31559]: I0216 02:39:39.070616 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"]
Feb 16 02:39:39.090187 master-0 kubenswrapper[31559]: I0216 02:39:39.085785 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-svc\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.090187 master-0 kubenswrapper[31559]: I0216 02:39:39.085840 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwrc\" (UniqueName: \"kubernetes.io/projected/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-kube-api-access-9dwrc\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.090187 master-0 kubenswrapper[31559]: I0216 02:39:39.085996 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-nb\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.090187 master-0 kubenswrapper[31559]: I0216 02:39:39.086016 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-config\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.090187 master-0 kubenswrapper[31559]: I0216 02:39:39.086055 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-swift-storage-0\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.090187 master-0 kubenswrapper[31559]: I0216 02:39:39.086117 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-sb\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.131196 master-0 kubenswrapper[31559]: I0216 02:39:39.129606 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5687464c96-4rx8g"]
Feb 16 02:39:39.167374 master-0 kubenswrapper[31559]: I0216 02:39:39.167321 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-scheduler-0"]
Feb 16 02:39:39.187768 master-0 kubenswrapper[31559]: I0216 02:39:39.187720 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-svc\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.187923 master-0 kubenswrapper[31559]: I0216 02:39:39.187775 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwrc\" (UniqueName: \"kubernetes.io/projected/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-kube-api-access-9dwrc\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.187923 master-0 kubenswrapper[31559]: I0216 02:39:39.187810 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-httpd-config\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.187923 master-0 kubenswrapper[31559]: I0216 02:39:39.187877 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxww\" (UniqueName: \"kubernetes.io/projected/cd9993cd-827d-4976-ac75-954bb9ace111-kube-api-access-zpxww\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.188088 master-0 kubenswrapper[31559]: I0216 02:39:39.187924 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-ovndb-tls-certs\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.188088 master-0 kubenswrapper[31559]: I0216 02:39:39.187955 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-nb\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.188088 master-0 kubenswrapper[31559]: I0216 02:39:39.187971 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-config\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.188088 master-0 kubenswrapper[31559]: I0216 02:39:39.187991 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-config\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.188088 master-0 kubenswrapper[31559]: I0216 02:39:39.188030 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-swift-storage-0\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.188088 master-0 kubenswrapper[31559]: I0216 02:39:39.188084 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-sb\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.188820 master-0 kubenswrapper[31559]: I0216 02:39:39.188106 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-combined-ca-bundle\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.189274 master-0 kubenswrapper[31559]: I0216 02:39:39.189249 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-nb\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.189875 master-0 kubenswrapper[31559]: I0216 02:39:39.189852 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-config\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.191012 master-0 kubenswrapper[31559]: I0216 02:39:39.190967 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-swift-storage-0\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.191616 master-0 kubenswrapper[31559]: I0216 02:39:39.191589 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-sb\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.192582 master-0 kubenswrapper[31559]: I0216 02:39:39.192550 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-svc\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.216934 master-0 kubenswrapper[31559]: I0216 02:39:39.216885 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwrc\" (UniqueName: \"kubernetes.io/projected/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-kube-api-access-9dwrc\") pod \"dnsmasq-dns-f4994bbb5-4cdht\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") " pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.289951 master-0 kubenswrapper[31559]: I0216 02:39:39.289860 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-ovndb-tls-certs\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.290151 master-0 kubenswrapper[31559]: I0216 02:39:39.289968 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-config\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.290151 master-0 kubenswrapper[31559]: I0216 02:39:39.290132 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-combined-ca-bundle\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.290345 master-0 kubenswrapper[31559]: I0216 02:39:39.290268 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-httpd-config\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.290458 master-0 kubenswrapper[31559]: I0216 02:39:39.290379 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpxww\" (UniqueName: \"kubernetes.io/projected/cd9993cd-827d-4976-ac75-954bb9ace111-kube-api-access-zpxww\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.295517 master-0 kubenswrapper[31559]: I0216 02:39:39.295476 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-ovndb-tls-certs\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.296394 master-0 kubenswrapper[31559]: I0216 02:39:39.296342 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-combined-ca-bundle\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.299365 master-0 kubenswrapper[31559]: I0216 02:39:39.299303 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-config\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.300564 master-0 kubenswrapper[31559]: I0216 02:39:39.300474 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-httpd-config\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.311353 master-0 kubenswrapper[31559]: I0216 02:39:39.311005 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpxww\" (UniqueName: \"kubernetes.io/projected/cd9993cd-827d-4976-ac75-954bb9ace111-kube-api-access-zpxww\") pod \"neutron-5687464c96-4rx8g\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.371130 master-0 kubenswrapper[31559]: I0216 02:39:39.371019 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:39.405188 master-0 kubenswrapper[31559]: I0216 02:39:39.405135 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fccd7959-kjwk5"]
Feb 16 02:39:39.460467 master-0 kubenswrapper[31559]: I0216 02:39:39.460012 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:39.620523 master-0 kubenswrapper[31559]: I0216 02:39:39.617571 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" event={"ID":"e15f1a9c-41e2-4f45-9a40-94557f09f863","Type":"ContainerStarted","Data":"ec23e57ebbd8ee71505729426558e50b00ae8ecafa8900f6c7d96df57ad80c02"}
Feb 16 02:39:39.628136 master-0 kubenswrapper[31559]: I0216 02:39:39.626614 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"630fbd6e-863a-4347-acc7-38ae08b97e61","Type":"ContainerStarted","Data":"d6e452c1c69921cc4b18515ace94e43ff1e9ef1cbd236272858710e8a9805e94"}
Feb 16 02:39:39.628136 master-0 kubenswrapper[31559]: I0216 02:39:39.627609 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-api-0"]
Feb 16 02:39:39.628225 master-0 kubenswrapper[31559]: I0216 02:39:39.628132 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" event={"ID":"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b","Type":"ContainerStarted","Data":"36331d46620cb6648f1707bfe22860ed3417b2fcc38a5b0fda7fb3f366ff714b"}
Feb 16 02:39:39.643386 master-0 kubenswrapper[31559]: I0216 02:39:39.641475 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-backup-0"]
Feb 16 02:39:39.908098 master-0 kubenswrapper[31559]: I0216 02:39:39.908032 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4994bbb5-4cdht"]
Feb 16 02:39:39.909346 master-0 kubenswrapper[31559]: W0216 02:39:39.909281 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43e8d4ad_2cb2_43ad_9255_b8285c08e9d5.slice/crio-ced7d97d6344a32ca2a2820ea5f13206843efcb26cb76c996fe0f19548a02f5f WatchSource:0}: Error finding container ced7d97d6344a32ca2a2820ea5f13206843efcb26cb76c996fe0f19548a02f5f: Status 404 returned error can't find the container with id ced7d97d6344a32ca2a2820ea5f13206843efcb26cb76c996fe0f19548a02f5f
Feb 16 02:39:40.096974 master-0 kubenswrapper[31559]: I0216 02:39:40.096376 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5687464c96-4rx8g"]
Feb 16 02:39:40.219345 master-0 kubenswrapper[31559]: I0216 02:39:40.219280 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-api-0"]
Feb 16 02:39:40.652716 master-0 kubenswrapper[31559]: I0216 02:39:40.650487 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"69ebefce-f456-4d6e-9111-5df13870bbae","Type":"ContainerStarted","Data":"410e777434938c4c8c5e98be007009fbba5f46b9025599709357b86626aa7b58"}
Feb 16 02:39:40.657125 master-0 kubenswrapper[31559]: I0216 02:39:40.655555 31559 generic.go:334] "Generic (PLEG): container finished" podID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" containerID="a33fd76e6d5311329dd15c3336604ea365083f06ba5c6c82724f49645ee3d5af" exitCode=0
Feb 16 02:39:40.657125 master-0 kubenswrapper[31559]: I0216 02:39:40.655612 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" event={"ID":"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5","Type":"ContainerDied","Data":"a33fd76e6d5311329dd15c3336604ea365083f06ba5c6c82724f49645ee3d5af"}
Feb 16 02:39:40.657125 master-0 kubenswrapper[31559]: I0216 02:39:40.655630 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" event={"ID":"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5","Type":"ContainerStarted","Data":"ced7d97d6344a32ca2a2820ea5f13206843efcb26cb76c996fe0f19548a02f5f"}
Feb 16 02:39:40.659648 master-0 kubenswrapper[31559]: I0216 02:39:40.659409 31559 generic.go:334] "Generic (PLEG): container finished" podID="2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" containerID="4dd1a0620a70adb9d769d6726e7f15faf44d57a4c1a2c155592666cda31f3dd8" exitCode=0
Feb 16 02:39:40.659719 master-0 kubenswrapper[31559]: I0216 02:39:40.659511 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" event={"ID":"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b","Type":"ContainerDied","Data":"4dd1a0620a70adb9d769d6726e7f15faf44d57a4c1a2c155592666cda31f3dd8"}
Feb 16 02:39:40.661866 master-0 kubenswrapper[31559]: I0216 02:39:40.661827 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" event={"ID":"e15f1a9c-41e2-4f45-9a40-94557f09f863","Type":"ContainerStarted","Data":"466a438ebab8119453c8ae2e8ab6651a7c9a5be49155d75a0db49d5ed0f13a89"}
Feb 16 02:39:40.663347 master-0 kubenswrapper[31559]: I0216 02:39:40.663322 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"14c0d02d-65db-41be-81a2-f4a1c5996cb8","Type":"ContainerStarted","Data":"c64b31dbb9d4a8d89fd1ab605afd41af761009d696a4e083a85dd253a2f9b898"}
Feb 16 02:39:40.663486 master-0 kubenswrapper[31559]: I0216 02:39:40.663349 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"14c0d02d-65db-41be-81a2-f4a1c5996cb8","Type":"ContainerStarted","Data":"7d7927c478070f383bbaf3f5f23c09747147f5c0dfd0e760aba7d461f2df2efc"}
Feb 16 02:39:40.665816 master-0 kubenswrapper[31559]: I0216 02:39:40.665787 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687464c96-4rx8g" event={"ID":"cd9993cd-827d-4976-ac75-954bb9ace111","Type":"ContainerStarted","Data":"f45d4d96aae3bcd16a415272cdfd08b0b444279c01764ca6c58606b1e088e4ee"}
Feb 16 02:39:40.665816 master-0 kubenswrapper[31559]: I0216 02:39:40.665815 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687464c96-4rx8g" event={"ID":"cd9993cd-827d-4976-ac75-954bb9ace111","Type":"ContainerStarted","Data":"d1ad75a1acbe8733430cf2074295e5cbccf242d8f095c00805af4dc8bf2f14d0"}
Feb 16 02:39:41.210649 master-0 kubenswrapper[31559]: I0216 02:39:41.210611 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:41.363819 master-0 kubenswrapper[31559]: I0216 02:39:41.363779 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-sb\") pod \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") "
Feb 16 02:39:41.363819 master-0 kubenswrapper[31559]: I0216 02:39:41.363819 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-swift-storage-0\") pod \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") "
Feb 16 02:39:41.363997 master-0 kubenswrapper[31559]: I0216 02:39:41.363930 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-config\") pod \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") "
Feb 16 02:39:41.364032 master-0 kubenswrapper[31559]: I0216 02:39:41.363988 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhpx5\" (UniqueName: \"kubernetes.io/projected/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-kube-api-access-hhpx5\") pod \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") "
Feb 16 02:39:41.364068 master-0 kubenswrapper[31559]: I0216 02:39:41.364045 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-nb\") pod \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") "
Feb 16 02:39:41.364139 master-0 kubenswrapper[31559]: I0216 02:39:41.364120 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-svc\") pod \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\" (UID: \"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b\") "
Feb 16 02:39:41.367858 master-0 kubenswrapper[31559]: I0216 02:39:41.367809 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-kube-api-access-hhpx5" (OuterVolumeSpecName: "kube-api-access-hhpx5") pod "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" (UID: "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b"). InnerVolumeSpecName "kube-api-access-hhpx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:39:41.393827 master-0 kubenswrapper[31559]: I0216 02:39:41.393737 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" (UID: "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:41.402195 master-0 kubenswrapper[31559]: I0216 02:39:41.402143 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" (UID: "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:41.407789 master-0 kubenswrapper[31559]: I0216 02:39:41.407093 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" (UID: "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:41.408609 master-0 kubenswrapper[31559]: I0216 02:39:41.408581 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" (UID: "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:41.428711 master-0 kubenswrapper[31559]: I0216 02:39:41.428059 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-config" (OuterVolumeSpecName: "config") pod "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" (UID: "2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:41.468689 master-0 kubenswrapper[31559]: I0216 02:39:41.468257 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:41.468689 master-0 kubenswrapper[31559]: I0216 02:39:41.468310 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:41.468689 master-0 kubenswrapper[31559]: I0216 02:39:41.468322 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:41.468689 master-0 kubenswrapper[31559]: I0216 02:39:41.468331 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:41.468689 master-0 kubenswrapper[31559]: I0216 02:39:41.468341 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhpx5\" (UniqueName: \"kubernetes.io/projected/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-kube-api-access-hhpx5\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:41.468689 master-0 kubenswrapper[31559]: I0216 02:39:41.468349 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:41.682029 master-0 kubenswrapper[31559]: I0216 02:39:41.681765 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687464c96-4rx8g" event={"ID":"cd9993cd-827d-4976-ac75-954bb9ace111","Type":"ContainerStarted","Data":"742ab028b0f99a9d936cbb8dd6ec80a01d1400a786fdd89945263ec04b24b11f"}
Feb 16 02:39:41.682029 master-0 kubenswrapper[31559]: I0216 02:39:41.681858 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:39:41.684910 master-0 kubenswrapper[31559]: I0216 02:39:41.684869 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"69ebefce-f456-4d6e-9111-5df13870bbae","Type":"ContainerStarted","Data":"65ad02d23bfe7fd4c4d077dbd9e92199fe76e1b58dad7f8ca534300c24bb0aa8"}
Feb 16 02:39:41.685068 master-0 kubenswrapper[31559]: I0216 02:39:41.685046 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"69ebefce-f456-4d6e-9111-5df13870bbae","Type":"ContainerStarted","Data":"dcd5e87d2ef35ada86ce3ee55e19a736ff52a215d064ff20a20133fe5a979579"}
Feb 16 02:39:41.687681 master-0 kubenswrapper[31559]: I0216 02:39:41.687636 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" event={"ID":"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5","Type":"ContainerStarted","Data":"a1d9f4e6292669bc3ff67d074e9ea41ae65a31160716f74e2635233d888e7c50"}
Feb 16 02:39:41.688492 master-0 kubenswrapper[31559]: I0216 02:39:41.688467 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:39:41.690698 master-0 kubenswrapper[31559]: I0216 02:39:41.690667 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"630fbd6e-863a-4347-acc7-38ae08b97e61","Type":"ContainerStarted","Data":"8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441"}
Feb 16 02:39:41.690698 master-0 kubenswrapper[31559]: I0216 02:39:41.690694 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"630fbd6e-863a-4347-acc7-38ae08b97e61","Type":"ContainerStarted","Data":"b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a"}
Feb 16 02:39:41.694210 master-0 kubenswrapper[31559]: I0216 02:39:41.694175 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fccd7959-kjwk5" event={"ID":"2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b","Type":"ContainerDied","Data":"36331d46620cb6648f1707bfe22860ed3417b2fcc38a5b0fda7fb3f366ff714b"}
Feb 16 02:39:41.694392 master-0 kubenswrapper[31559]: I0216 02:39:41.694368 31559 scope.go:117] "RemoveContainer" containerID="4dd1a0620a70adb9d769d6726e7f15faf44d57a4c1a2c155592666cda31f3dd8"
Feb 16 02:39:41.694776 master-0 kubenswrapper[31559]: I0216 02:39:41.694749 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fccd7959-kjwk5"
Feb 16 02:39:41.697852 master-0 kubenswrapper[31559]: I0216 02:39:41.697783 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" event={"ID":"e15f1a9c-41e2-4f45-9a40-94557f09f863","Type":"ContainerStarted","Data":"1722ee0b1d979060e2896d028458d7a30d842dda617ffde8257b72ca530100c0"}
Feb 16 02:39:41.704186 master-0 kubenswrapper[31559]: I0216 02:39:41.704133 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"14c0d02d-65db-41be-81a2-f4a1c5996cb8","Type":"ContainerStarted","Data":"96d7e74c9317ad75c2ba8a334fd5a7f3aaa196471bc017f4e4846ca45d038338"}
Feb 16 02:39:41.704370 master-0 kubenswrapper[31559]: I0216 02:39:41.704279 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-dde57-api-0" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-dde57-api-log" containerID="cri-o://c64b31dbb9d4a8d89fd1ab605afd41af761009d696a4e083a85dd253a2f9b898" gracePeriod=30
Feb 16 02:39:41.704550 master-0 kubenswrapper[31559]: I0216 02:39:41.704536 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:41.704621 master-0 kubenswrapper[31559]: I0216 02:39:41.704533 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-dde57-api-0" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-api" containerID="cri-o://96d7e74c9317ad75c2ba8a334fd5a7f3aaa196471bc017f4e4846ca45d038338" gracePeriod=30
Feb 16 02:39:41.715666 master-0 kubenswrapper[31559]: I0216 02:39:41.715533 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5687464c96-4rx8g" podStartSLOduration=3.715512147 podStartE2EDuration="3.715512147s" podCreationTimestamp="2026-02-16 02:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:41.69785581 +0000 UTC m=+1034.042461825" watchObservedRunningTime="2026-02-16 02:39:41.715512147 +0000 UTC m=+1034.060118162"
Feb 16 02:39:41.735642 master-0 kubenswrapper[31559]: I0216 02:39:41.735552 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" podStartSLOduration=3.698192917 podStartE2EDuration="4.735527985s" podCreationTimestamp="2026-02-16 02:39:37 +0000 UTC" firstStartedPulling="2026-02-16 02:39:39.041671522 +0000 UTC m=+1031.386277537" lastFinishedPulling="2026-02-16 02:39:40.07900657 +0000 UTC m=+1032.423612605" observedRunningTime="2026-02-16 02:39:41.725522801 +0000 UTC m=+1034.070128826" watchObservedRunningTime="2026-02-16 02:39:41.735527985 +0000 UTC m=+1034.080134000"
Feb 16 02:39:41.777476 master-0 kubenswrapper[31559]: I0216 02:39:41.777369 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" podStartSLOduration=3.777345785 podStartE2EDuration="3.777345785s" podCreationTimestamp="2026-02-16 02:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:41.762607861 +0000 UTC m=+1034.107213866" watchObservedRunningTime="2026-02-16 02:39:41.777345785 +0000 UTC m=+1034.121951800"
Feb 16 02:39:41.830933 master-0 kubenswrapper[31559]: I0216 02:39:41.828410 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-backup-0" podStartSLOduration=2.848301223 podStartE2EDuration="3.828387529s" podCreationTimestamp="2026-02-16 02:39:38 +0000 UTC" firstStartedPulling="2026-02-16 02:39:39.696069042 +0000 UTC m=+1032.040675057" lastFinishedPulling="2026-02-16 02:39:40.676155348 +0000 UTC m=+1033.020761363" observedRunningTime="2026-02-16 02:39:41.808606477 +0000 UTC m=+1034.153212492" watchObservedRunningTime="2026-02-16 02:39:41.828387529 +0000 UTC m=+1034.172993544"
Feb 16 02:39:41.910300 master-0 kubenswrapper[31559]: I0216 02:39:41.910046 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-scheduler-0" podStartSLOduration=4.01229917 podStartE2EDuration="4.910027208s" podCreationTimestamp="2026-02-16 02:39:37 +0000 UTC" firstStartedPulling="2026-02-16 02:39:39.17135289 +0000 UTC m=+1031.515958905" lastFinishedPulling="2026-02-16 02:39:40.069080928 +0000 UTC m=+1032.413686943" observedRunningTime="2026-02-16 02:39:41.84185347 +0000 UTC m=+1034.186459485" watchObservedRunningTime="2026-02-16 02:39:41.910027208 +0000 UTC m=+1034.254633223"
Feb 16 02:39:41.983700 master-0 kubenswrapper[31559]: I0216 02:39:41.983590 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fccd7959-kjwk5"]
Feb 16 02:39:42.008624 master-0 kubenswrapper[31559]: I0216 02:39:42.008553 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78fccd7959-kjwk5"]
Feb 16 02:39:42.019110 master-0 kubenswrapper[31559]: I0216 02:39:42.019043 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-api-0" podStartSLOduration=4.019026771 podStartE2EDuration="4.019026771s" podCreationTimestamp="2026-02-16 02:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:41.983562191 +0000 UTC m=+1034.328168226" watchObservedRunningTime="2026-02-16 02:39:42.019026771 +0000 UTC m=+1034.363632786"
Feb 16 02:39:42.722848 master-0 kubenswrapper[31559]: I0216 02:39:42.722731 31559 generic.go:334] "Generic (PLEG): container finished" podID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerID="c64b31dbb9d4a8d89fd1ab605afd41af761009d696a4e083a85dd253a2f9b898" exitCode=143
Feb 16 02:39:42.723454 master-0 kubenswrapper[31559]: I0216 02:39:42.722810 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"14c0d02d-65db-41be-81a2-f4a1c5996cb8","Type":"ContainerDied","Data":"c64b31dbb9d4a8d89fd1ab605afd41af761009d696a4e083a85dd253a2f9b898"}
Feb 16 02:39:43.439600 master-0 kubenswrapper[31559]: I0216 02:39:43.438900 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:43.474362 master-0 kubenswrapper[31559]: I0216 02:39:43.473747 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:39:43.643241 master-0 kubenswrapper[31559]: I0216 02:39:43.643168 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-dde57-backup-0"
Feb 16 02:39:43.942945 master-0 kubenswrapper[31559]: I0216 02:39:43.942879 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" path="/var/lib/kubelet/pods/2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b/volumes"
Feb 16 02:39:44.305398 master-0 kubenswrapper[31559]: I0216 02:39:44.305273 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64cc79985-qdkm5"]
Feb 16 02:39:44.306050 master-0 kubenswrapper[31559]: E0216 02:39:44.306034 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" containerName="init"
Feb 16 02:39:44.306120 master-0 kubenswrapper[31559]: I0216 02:39:44.306110 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" containerName="init"
Feb 16 02:39:44.306413 master-0 kubenswrapper[31559]: I0216 02:39:44.306400 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8d93bc-6d8e-42cf-bd4c-c5830c7d191b" containerName="init"
Feb 16 02:39:44.307539 master-0 kubenswrapper[31559]: I0216 02:39:44.307522 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64cc79985-qdkm5"
Feb 16 02:39:44.310908 master-0 kubenswrapper[31559]: I0216 02:39:44.310876 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 16 02:39:44.311176 master-0 kubenswrapper[31559]: I0216 02:39:44.311157 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 16 02:39:44.334414 master-0 kubenswrapper[31559]: I0216 02:39:44.334356 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64cc79985-qdkm5"]
Feb 16 02:39:44.446064 master-0 kubenswrapper[31559]: I0216 02:39:44.445999 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-public-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5"
Feb 16 02:39:44.446064 master-0 kubenswrapper[31559]: I0216 02:39:44.446062 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-httpd-config\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5"
Feb 16 02:39:44.446315 master-0 kubenswrapper[31559]: I0216 02:39:44.446127 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-config\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5"
Feb 16 02:39:44.446315 master-0 kubenswrapper[31559]: I0216 02:39:44.446179 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-ovndb-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5"
Feb 16 02:39:44.446315 master-0 kubenswrapper[31559]: I0216 02:39:44.446237 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-internal-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5"
Feb 16 02:39:44.446315 master-0 kubenswrapper[31559]: I0216 02:39:44.446272 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-combined-ca-bundle\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5"
Feb 16 02:39:44.446315 master-0 kubenswrapper[31559]: I0216
02:39:44.446316 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cftfw\" (UniqueName: \"kubernetes.io/projected/90e2c78a-29bc-4574-94a9-e509b9cdb596-kube-api-access-cftfw\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.548262 master-0 kubenswrapper[31559]: I0216 02:39:44.548184 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-httpd-config\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.548594 master-0 kubenswrapper[31559]: I0216 02:39:44.548281 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-config\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.548594 master-0 kubenswrapper[31559]: I0216 02:39:44.548335 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-ovndb-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.548594 master-0 kubenswrapper[31559]: I0216 02:39:44.548389 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-internal-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.548594 master-0 kubenswrapper[31559]: I0216 02:39:44.548419 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-combined-ca-bundle\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.548594 master-0 kubenswrapper[31559]: I0216 02:39:44.548472 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cftfw\" (UniqueName: \"kubernetes.io/projected/90e2c78a-29bc-4574-94a9-e509b9cdb596-kube-api-access-cftfw\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.548594 master-0 kubenswrapper[31559]: I0216 02:39:44.548503 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-public-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.552348 master-0 kubenswrapper[31559]: I0216 02:39:44.552309 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-public-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.565930 master-0 kubenswrapper[31559]: I0216 02:39:44.553355 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-internal-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.565930 master-0 kubenswrapper[31559]: I0216 02:39:44.553432 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-httpd-config\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.565930 master-0 kubenswrapper[31559]: I0216 02:39:44.553825 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-ovndb-tls-certs\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.565930 master-0 kubenswrapper[31559]: I0216 02:39:44.554117 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-config\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.565930 master-0 kubenswrapper[31559]: I0216 02:39:44.563947 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cftfw\" (UniqueName: \"kubernetes.io/projected/90e2c78a-29bc-4574-94a9-e509b9cdb596-kube-api-access-cftfw\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.577359 master-0 kubenswrapper[31559]: I0216 02:39:44.577283 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e2c78a-29bc-4574-94a9-e509b9cdb596-combined-ca-bundle\") pod \"neutron-64cc79985-qdkm5\" (UID: \"90e2c78a-29bc-4574-94a9-e509b9cdb596\") " pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:44.623096 master-0 kubenswrapper[31559]: I0216 02:39:44.623042 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:45.265514 master-0 kubenswrapper[31559]: I0216 02:39:45.264542 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64cc79985-qdkm5"] Feb 16 02:39:45.763142 master-0 kubenswrapper[31559]: I0216 02:39:45.760240 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64cc79985-qdkm5" event={"ID":"90e2c78a-29bc-4574-94a9-e509b9cdb596","Type":"ContainerStarted","Data":"53ab74461e6dd6d2ec22bdd298e589957417a04d71449a219b66b465f1f5034f"} Feb 16 02:39:45.763142 master-0 kubenswrapper[31559]: I0216 02:39:45.760287 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64cc79985-qdkm5" event={"ID":"90e2c78a-29bc-4574-94a9-e509b9cdb596","Type":"ContainerStarted","Data":"9a8c9a4c114e1ae98f40f99dbf60fdad7ada03f3ed395bcf0a7b35eca6957282"} Feb 16 02:39:46.779807 master-0 kubenswrapper[31559]: I0216 02:39:46.779719 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64cc79985-qdkm5" event={"ID":"90e2c78a-29bc-4574-94a9-e509b9cdb596","Type":"ContainerStarted","Data":"2143973bc42c68587e5029f94e896ebe7ea9e2d598d568b261d676e2a17b1472"} Feb 16 02:39:46.780692 master-0 kubenswrapper[31559]: I0216 02:39:46.779854 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64cc79985-qdkm5" Feb 16 02:39:46.821978 master-0 kubenswrapper[31559]: I0216 02:39:46.821864 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64cc79985-qdkm5" podStartSLOduration=2.821778095 podStartE2EDuration="2.821778095s" podCreationTimestamp="2026-02-16 02:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:46.79910638 +0000 UTC m=+1039.143712405" watchObservedRunningTime="2026-02-16 02:39:46.821778095 +0000 UTC m=+1039.166384120" Feb 16 02:39:48.641775 
master-0 kubenswrapper[31559]: I0216 02:39:48.639984 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:48.685505 master-0 kubenswrapper[31559]: I0216 02:39:48.682787 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:48.703720 master-0 kubenswrapper[31559]: I0216 02:39:48.703631 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"] Feb 16 02:39:48.742071 master-0 kubenswrapper[31559]: I0216 02:39:48.742012 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-scheduler-0"] Feb 16 02:39:48.841956 master-0 kubenswrapper[31559]: I0216 02:39:48.841805 31559 generic.go:334] "Generic (PLEG): container finished" podID="656fd1dc-bb87-40c8-a161-31a194c23629" containerID="cf3ac35f5b26be72e1fc0465c5d5599ca1dddd3c6bf02f04f4ce1162cce6acd6" exitCode=0 Feb 16 02:39:48.842184 master-0 kubenswrapper[31559]: I0216 02:39:48.842134 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-dde57-scheduler-0" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerName="cinder-scheduler" containerID="cri-o://b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a" gracePeriod=30 Feb 16 02:39:48.842280 master-0 kubenswrapper[31559]: I0216 02:39:48.842250 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ljf52" event={"ID":"656fd1dc-bb87-40c8-a161-31a194c23629","Type":"ContainerDied","Data":"cf3ac35f5b26be72e1fc0465c5d5599ca1dddd3c6bf02f04f4ce1162cce6acd6"} Feb 16 02:39:48.842550 master-0 kubenswrapper[31559]: I0216 02:39:48.842473 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-dde57-scheduler-0" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerName="probe" 
containerID="cri-o://8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441" gracePeriod=30 Feb 16 02:39:48.843335 master-0 kubenswrapper[31559]: I0216 02:39:48.843241 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerName="probe" containerID="cri-o://1722ee0b1d979060e2896d028458d7a30d842dda617ffde8257b72ca530100c0" gracePeriod=30 Feb 16 02:39:48.843412 master-0 kubenswrapper[31559]: I0216 02:39:48.843370 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerName="cinder-volume" containerID="cri-o://466a438ebab8119453c8ae2e8ab6651a7c9a5be49155d75a0db49d5ed0f13a89" gracePeriod=30 Feb 16 02:39:48.956742 master-0 kubenswrapper[31559]: I0216 02:39:48.956679 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:49.008000 master-0 kubenswrapper[31559]: I0216 02:39:49.007937 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-backup-0"] Feb 16 02:39:49.372819 master-0 kubenswrapper[31559]: I0216 02:39:49.372647 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" Feb 16 02:39:49.524111 master-0 kubenswrapper[31559]: I0216 02:39:49.524050 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cc4888b7-xrh7r"] Feb 16 02:39:49.524359 master-0 kubenswrapper[31559]: I0216 02:39:49.524314 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" podUID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" containerName="dnsmasq-dns" containerID="cri-o://830f28140856edc42ddb2e2402fdba876374b84819f772279736d4958da75a51" gracePeriod=10 Feb 16 02:39:49.875907 master-0 
kubenswrapper[31559]: I0216 02:39:49.875846 31559 generic.go:334] "Generic (PLEG): container finished" podID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerID="8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441" exitCode=0 Feb 16 02:39:49.876625 master-0 kubenswrapper[31559]: I0216 02:39:49.875919 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"630fbd6e-863a-4347-acc7-38ae08b97e61","Type":"ContainerDied","Data":"8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441"} Feb 16 02:39:49.883011 master-0 kubenswrapper[31559]: I0216 02:39:49.882913 31559 generic.go:334] "Generic (PLEG): container finished" podID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" containerID="830f28140856edc42ddb2e2402fdba876374b84819f772279736d4958da75a51" exitCode=0 Feb 16 02:39:49.883197 master-0 kubenswrapper[31559]: I0216 02:39:49.883033 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" event={"ID":"4b6218a7-c0ec-46b5-a1df-cdccc54d540c","Type":"ContainerDied","Data":"830f28140856edc42ddb2e2402fdba876374b84819f772279736d4958da75a51"} Feb 16 02:39:49.888506 master-0 kubenswrapper[31559]: I0216 02:39:49.888463 31559 generic.go:334] "Generic (PLEG): container finished" podID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerID="1722ee0b1d979060e2896d028458d7a30d842dda617ffde8257b72ca530100c0" exitCode=0 Feb 16 02:39:49.888506 master-0 kubenswrapper[31559]: I0216 02:39:49.888502 31559 generic.go:334] "Generic (PLEG): container finished" podID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerID="466a438ebab8119453c8ae2e8ab6651a7c9a5be49155d75a0db49d5ed0f13a89" exitCode=0 Feb 16 02:39:49.888802 master-0 kubenswrapper[31559]: I0216 02:39:49.888779 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-dde57-backup-0" podUID="69ebefce-f456-4d6e-9111-5df13870bbae" containerName="cinder-backup" 
containerID="cri-o://dcd5e87d2ef35ada86ce3ee55e19a736ff52a215d064ff20a20133fe5a979579" gracePeriod=30 Feb 16 02:39:49.889442 master-0 kubenswrapper[31559]: I0216 02:39:49.889324 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" event={"ID":"e15f1a9c-41e2-4f45-9a40-94557f09f863","Type":"ContainerDied","Data":"1722ee0b1d979060e2896d028458d7a30d842dda617ffde8257b72ca530100c0"} Feb 16 02:39:49.889442 master-0 kubenswrapper[31559]: I0216 02:39:49.889403 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" event={"ID":"e15f1a9c-41e2-4f45-9a40-94557f09f863","Type":"ContainerDied","Data":"466a438ebab8119453c8ae2e8ab6651a7c9a5be49155d75a0db49d5ed0f13a89"} Feb 16 02:39:49.890089 master-0 kubenswrapper[31559]: I0216 02:39:49.890027 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-dde57-backup-0" podUID="69ebefce-f456-4d6e-9111-5df13870bbae" containerName="probe" containerID="cri-o://65ad02d23bfe7fd4c4d077dbd9e92199fe76e1b58dad7f8ca534300c24bb0aa8" gracePeriod=30 Feb 16 02:39:49.983713 master-0 kubenswrapper[31559]: I0216 02:39:49.983657 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:50.139077 master-0 kubenswrapper[31559]: I0216 02:39:50.139010 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-lib-cinder\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139077 master-0 kubenswrapper[31559]: I0216 02:39:50.139066 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-combined-ca-bundle\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139304 master-0 kubenswrapper[31559]: I0216 02:39:50.139101 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-nvme\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139304 master-0 kubenswrapper[31559]: I0216 02:39:50.139120 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-brick\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139304 master-0 kubenswrapper[31559]: I0216 02:39:50.139136 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.139304 master-0 kubenswrapper[31559]: I0216 02:39:50.139174 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data-custom\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139304 master-0 kubenswrapper[31559]: I0216 02:39:50.139198 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.139304 master-0 kubenswrapper[31559]: I0216 02:39:50.139270 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.139304 master-0 kubenswrapper[31559]: I0216 02:39:50.139268 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139613 master-0 kubenswrapper[31559]: I0216 02:39:50.139527 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-lib-modules\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139613 master-0 kubenswrapper[31559]: I0216 02:39:50.139583 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-scripts\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139700 master-0 kubenswrapper[31559]: I0216 02:39:50.139675 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-machine-id\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139762 master-0 kubenswrapper[31559]: I0216 02:39:50.139732 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-sys\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139852 master-0 kubenswrapper[31559]: I0216 02:39:50.139807 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-8cnn5\" (UniqueName: \"kubernetes.io/projected/e15f1a9c-41e2-4f45-9a40-94557f09f863-kube-api-access-8cnn5\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.139897 master-0 kubenswrapper[31559]: I0216 02:39:50.139881 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-dev\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.140003 master-0 kubenswrapper[31559]: I0216 02:39:50.139939 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-iscsi\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.140003 master-0 kubenswrapper[31559]: I0216 02:39:50.139991 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-cinder\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.140113 master-0 kubenswrapper[31559]: I0216 02:39:50.140059 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-run\") pod \"e15f1a9c-41e2-4f45-9a40-94557f09f863\" (UID: \"e15f1a9c-41e2-4f45-9a40-94557f09f863\") " Feb 16 02:39:50.141177 master-0 kubenswrapper[31559]: I0216 02:39:50.141141 31559 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:50.141231 master-0 kubenswrapper[31559]: I0216 
02:39:50.141178 31559 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:50.141231 master-0 kubenswrapper[31559]: I0216 02:39:50.141197 31559 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:50.141540 master-0 kubenswrapper[31559]: I0216 02:39:50.141490 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-dev" (OuterVolumeSpecName: "dev") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.141540 master-0 kubenswrapper[31559]: I0216 02:39:50.141493 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-run" (OuterVolumeSpecName: "run") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.141540 master-0 kubenswrapper[31559]: I0216 02:39:50.141530 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.141646 master-0 kubenswrapper[31559]: I0216 02:39:50.141536 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-sys" (OuterVolumeSpecName: "sys") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.145928 master-0 kubenswrapper[31559]: I0216 02:39:50.141577 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.145928 master-0 kubenswrapper[31559]: I0216 02:39:50.141743 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:50.146421 master-0 kubenswrapper[31559]: I0216 02:39:50.141764 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:39:50.146421 master-0 kubenswrapper[31559]: I0216 02:39:50.142226 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:50.146421 master-0 kubenswrapper[31559]: I0216 02:39:50.145128 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-scripts" (OuterVolumeSpecName: "scripts") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:50.147619 master-0 kubenswrapper[31559]: I0216 02:39:50.147577 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15f1a9c-41e2-4f45-9a40-94557f09f863-kube-api-access-8cnn5" (OuterVolumeSpecName: "kube-api-access-8cnn5") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "kube-api-access-8cnn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:39:50.243186 master-0 kubenswrapper[31559]: I0216 02:39:50.243114 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:50.243186 master-0 kubenswrapper[31559]: I0216 02:39:50.243126 31559 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-dev\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243192 31559 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243207 31559 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243220 31559 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-run\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243229 31559 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243238 31559 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-lib-modules\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243248 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243258 31559 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243269 31559 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e15f1a9c-41e2-4f45-9a40-94557f09f863-sys\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.243383 master-0 kubenswrapper[31559]: I0216 02:39:50.243279 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cnn5\" (UniqueName: \"kubernetes.io/projected/e15f1a9c-41e2-4f45-9a40-94557f09f863-kube-api-access-8cnn5\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.269314 master-0 kubenswrapper[31559]: I0216 02:39:50.269244 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data" (OuterVolumeSpecName: "config-data") pod "e15f1a9c-41e2-4f45-9a40-94557f09f863" (UID: "e15f1a9c-41e2-4f45-9a40-94557f09f863"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:50.337538 master-0 kubenswrapper[31559]: I0216 02:39:50.337493 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r"
Feb 16 02:39:50.345503 master-0 kubenswrapper[31559]: I0216 02:39:50.345421 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.345503 master-0 kubenswrapper[31559]: I0216 02:39:50.345482 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15f1a9c-41e2-4f45-9a40-94557f09f863-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.433459 master-0 kubenswrapper[31559]: I0216 02:39:50.418608 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-ljf52"
Feb 16 02:39:50.456598 master-0 kubenswrapper[31559]: I0216 02:39:50.446334 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-config\") pod \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") "
Feb 16 02:39:50.456598 master-0 kubenswrapper[31559]: I0216 02:39:50.446560 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmtt\" (UniqueName: \"kubernetes.io/projected/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-kube-api-access-nvmtt\") pod \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") "
Feb 16 02:39:50.456598 master-0 kubenswrapper[31559]: I0216 02:39:50.446598 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-sb\") pod \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") "
Feb 16 02:39:50.456598 master-0 kubenswrapper[31559]: I0216 02:39:50.446614 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-swift-storage-0\") pod \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") "
Feb 16 02:39:50.456598 master-0 kubenswrapper[31559]: I0216 02:39:50.446664 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-nb\") pod \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") "
Feb 16 02:39:50.456598 master-0 kubenswrapper[31559]: I0216 02:39:50.446758 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-svc\") pod \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\" (UID: \"4b6218a7-c0ec-46b5-a1df-cdccc54d540c\") "
Feb 16 02:39:50.476527 master-0 kubenswrapper[31559]: I0216 02:39:50.472176 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-kube-api-access-nvmtt" (OuterVolumeSpecName: "kube-api-access-nvmtt") pod "4b6218a7-c0ec-46b5-a1df-cdccc54d540c" (UID: "4b6218a7-c0ec-46b5-a1df-cdccc54d540c"). InnerVolumeSpecName "kube-api-access-nvmtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:39:50.548491 master-0 kubenswrapper[31559]: I0216 02:39:50.548409 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-config-data\") pod \"656fd1dc-bb87-40c8-a161-31a194c23629\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") "
Feb 16 02:39:50.548602 master-0 kubenswrapper[31559]: I0216 02:39:50.548530 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-combined-ca-bundle\") pod \"656fd1dc-bb87-40c8-a161-31a194c23629\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") "
Feb 16 02:39:50.548602 master-0 kubenswrapper[31559]: I0216 02:39:50.548556 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-scripts\") pod \"656fd1dc-bb87-40c8-a161-31a194c23629\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") "
Feb 16 02:39:50.548901 master-0 kubenswrapper[31559]: I0216 02:39:50.548612 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/656fd1dc-bb87-40c8-a161-31a194c23629-config-data-merged\") pod \"656fd1dc-bb87-40c8-a161-31a194c23629\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") "
Feb 16 02:39:50.548901 master-0 kubenswrapper[31559]: I0216 02:39:50.548631 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/656fd1dc-bb87-40c8-a161-31a194c23629-etc-podinfo\") pod \"656fd1dc-bb87-40c8-a161-31a194c23629\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") "
Feb 16 02:39:50.548901 master-0 kubenswrapper[31559]: I0216 02:39:50.548697 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45gnj\" (UniqueName: \"kubernetes.io/projected/656fd1dc-bb87-40c8-a161-31a194c23629-kube-api-access-45gnj\") pod \"656fd1dc-bb87-40c8-a161-31a194c23629\" (UID: \"656fd1dc-bb87-40c8-a161-31a194c23629\") "
Feb 16 02:39:50.549574 master-0 kubenswrapper[31559]: I0216 02:39:50.549300 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvmtt\" (UniqueName: \"kubernetes.io/projected/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-kube-api-access-nvmtt\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.549574 master-0 kubenswrapper[31559]: I0216 02:39:50.549513 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656fd1dc-bb87-40c8-a161-31a194c23629-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "656fd1dc-bb87-40c8-a161-31a194c23629" (UID: "656fd1dc-bb87-40c8-a161-31a194c23629"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:39:50.551619 master-0 kubenswrapper[31559]: I0216 02:39:50.551568 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b6218a7-c0ec-46b5-a1df-cdccc54d540c" (UID: "4b6218a7-c0ec-46b5-a1df-cdccc54d540c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:50.553560 master-0 kubenswrapper[31559]: I0216 02:39:50.553516 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-scripts" (OuterVolumeSpecName: "scripts") pod "656fd1dc-bb87-40c8-a161-31a194c23629" (UID: "656fd1dc-bb87-40c8-a161-31a194c23629"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:50.557066 master-0 kubenswrapper[31559]: I0216 02:39:50.554634 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656fd1dc-bb87-40c8-a161-31a194c23629-kube-api-access-45gnj" (OuterVolumeSpecName: "kube-api-access-45gnj") pod "656fd1dc-bb87-40c8-a161-31a194c23629" (UID: "656fd1dc-bb87-40c8-a161-31a194c23629"). InnerVolumeSpecName "kube-api-access-45gnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:39:50.562369 master-0 kubenswrapper[31559]: I0216 02:39:50.562311 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/656fd1dc-bb87-40c8-a161-31a194c23629-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "656fd1dc-bb87-40c8-a161-31a194c23629" (UID: "656fd1dc-bb87-40c8-a161-31a194c23629"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 16 02:39:50.580593 master-0 kubenswrapper[31559]: I0216 02:39:50.580518 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-config-data" (OuterVolumeSpecName: "config-data") pod "656fd1dc-bb87-40c8-a161-31a194c23629" (UID: "656fd1dc-bb87-40c8-a161-31a194c23629"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:50.596190 master-0 kubenswrapper[31559]: I0216 02:39:50.596141 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b6218a7-c0ec-46b5-a1df-cdccc54d540c" (UID: "4b6218a7-c0ec-46b5-a1df-cdccc54d540c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:50.597836 master-0 kubenswrapper[31559]: I0216 02:39:50.597789 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b6218a7-c0ec-46b5-a1df-cdccc54d540c" (UID: "4b6218a7-c0ec-46b5-a1df-cdccc54d540c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:50.600076 master-0 kubenswrapper[31559]: I0216 02:39:50.600025 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b6218a7-c0ec-46b5-a1df-cdccc54d540c" (UID: "4b6218a7-c0ec-46b5-a1df-cdccc54d540c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:50.609960 master-0 kubenswrapper[31559]: I0216 02:39:50.609898 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "656fd1dc-bb87-40c8-a161-31a194c23629" (UID: "656fd1dc-bb87-40c8-a161-31a194c23629"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:39:50.613399 master-0 kubenswrapper[31559]: I0216 02:39:50.613334 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-config" (OuterVolumeSpecName: "config") pod "4b6218a7-c0ec-46b5-a1df-cdccc54d540c" (UID: "4b6218a7-c0ec-46b5-a1df-cdccc54d540c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:39:50.651811 master-0 kubenswrapper[31559]: I0216 02:39:50.651729 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.651811 master-0 kubenswrapper[31559]: I0216 02:39:50.651778 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.651811 master-0 kubenswrapper[31559]: I0216 02:39:50.651790 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656fd1dc-bb87-40c8-a161-31a194c23629-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.651811 master-0 kubenswrapper[31559]: I0216 02:39:50.651798 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.651811 master-0 kubenswrapper[31559]: I0216 02:39:50.651811 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.651811 master-0 kubenswrapper[31559]: I0216 02:39:50.651820 31559 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/656fd1dc-bb87-40c8-a161-31a194c23629-config-data-merged\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.651811 master-0 kubenswrapper[31559]: I0216 02:39:50.651829 31559 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/656fd1dc-bb87-40c8-a161-31a194c23629-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.652218 master-0 kubenswrapper[31559]: I0216 02:39:50.651837 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.652218 master-0 kubenswrapper[31559]: I0216 02:39:50.651867 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45gnj\" (UniqueName: \"kubernetes.io/projected/656fd1dc-bb87-40c8-a161-31a194c23629-kube-api-access-45gnj\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.652218 master-0 kubenswrapper[31559]: I0216 02:39:50.651877 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.652218 master-0 kubenswrapper[31559]: I0216 02:39:50.651885 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6218a7-c0ec-46b5-a1df-cdccc54d540c-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:39:50.905408 master-0 kubenswrapper[31559]: I0216 02:39:50.905109 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" event={"ID":"e15f1a9c-41e2-4f45-9a40-94557f09f863","Type":"ContainerDied","Data":"ec23e57ebbd8ee71505729426558e50b00ae8ecafa8900f6c7d96df57ad80c02"}
Feb 16 02:39:50.905408 master-0 kubenswrapper[31559]: I0216 02:39:50.905140 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:50.905408 master-0 kubenswrapper[31559]: I0216 02:39:50.905176 31559 scope.go:117] "RemoveContainer" containerID="1722ee0b1d979060e2896d028458d7a30d842dda617ffde8257b72ca530100c0"
Feb 16 02:39:50.910495 master-0 kubenswrapper[31559]: I0216 02:39:50.909124 31559 generic.go:334] "Generic (PLEG): container finished" podID="69ebefce-f456-4d6e-9111-5df13870bbae" containerID="65ad02d23bfe7fd4c4d077dbd9e92199fe76e1b58dad7f8ca534300c24bb0aa8" exitCode=0
Feb 16 02:39:50.910495 master-0 kubenswrapper[31559]: I0216 02:39:50.909186 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"69ebefce-f456-4d6e-9111-5df13870bbae","Type":"ContainerDied","Data":"65ad02d23bfe7fd4c4d077dbd9e92199fe76e1b58dad7f8ca534300c24bb0aa8"}
Feb 16 02:39:50.911960 master-0 kubenswrapper[31559]: I0216 02:39:50.911887 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ljf52" event={"ID":"656fd1dc-bb87-40c8-a161-31a194c23629","Type":"ContainerDied","Data":"aade0af1b7738bfe24a56925bb49b35a9e66c034d6fed4ab1896c428d371799f"}
Feb 16 02:39:50.912042 master-0 kubenswrapper[31559]: I0216 02:39:50.911965 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aade0af1b7738bfe24a56925bb49b35a9e66c034d6fed4ab1896c428d371799f"
Feb 16 02:39:50.912042 master-0 kubenswrapper[31559]: I0216 02:39:50.911968 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-ljf52"
Feb 16 02:39:50.916685 master-0 kubenswrapper[31559]: I0216 02:39:50.916617 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r" event={"ID":"4b6218a7-c0ec-46b5-a1df-cdccc54d540c","Type":"ContainerDied","Data":"98539feb3d65696980b9954c125c831b7a334b6c4b866635c6596a5df8610fb3"}
Feb 16 02:39:50.916800 master-0 kubenswrapper[31559]: I0216 02:39:50.916735 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cc4888b7-xrh7r"
Feb 16 02:39:50.936023 master-0 kubenswrapper[31559]: I0216 02:39:50.935957 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-dde57-api-0"
Feb 16 02:39:50.951456 master-0 kubenswrapper[31559]: I0216 02:39:50.951360 31559 scope.go:117] "RemoveContainer" containerID="466a438ebab8119453c8ae2e8ab6651a7c9a5be49155d75a0db49d5ed0f13a89"
Feb 16 02:39:50.968631 master-0 kubenswrapper[31559]: I0216 02:39:50.968573 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cc4888b7-xrh7r"]
Feb 16 02:39:50.979478 master-0 kubenswrapper[31559]: I0216 02:39:50.979409 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cc4888b7-xrh7r"]
Feb 16 02:39:50.995722 master-0 kubenswrapper[31559]: I0216 02:39:50.995644 31559 scope.go:117] "RemoveContainer" containerID="830f28140856edc42ddb2e2402fdba876374b84819f772279736d4958da75a51"
Feb 16 02:39:51.020458 master-0 kubenswrapper[31559]: I0216 02:39:51.020344 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"]
Feb 16 02:39:51.037615 master-0 kubenswrapper[31559]: I0216 02:39:51.037563 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"]
Feb 16 02:39:51.039720 master-0 kubenswrapper[31559]: I0216 02:39:51.039166 31559 scope.go:117] "RemoveContainer" containerID="d8bc4cec40e08a7dc1ec8533dcca3de93db71fbdd72065c9c572dd7a3758ecba"
Feb 16 02:39:51.061379 master-0 kubenswrapper[31559]: I0216 02:39:51.061295 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"]
Feb 16 02:39:51.077061 master-0 kubenswrapper[31559]: E0216 02:39:51.077007 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerName="probe"
Feb 16 02:39:51.077061 master-0 kubenswrapper[31559]: I0216 02:39:51.077048 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerName="probe"
Feb 16 02:39:51.077061 master-0 kubenswrapper[31559]: E0216 02:39:51.077057 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656fd1dc-bb87-40c8-a161-31a194c23629" containerName="init"
Feb 16 02:39:51.077061 master-0 kubenswrapper[31559]: I0216 02:39:51.077063 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="656fd1dc-bb87-40c8-a161-31a194c23629" containerName="init"
Feb 16 02:39:51.077227 master-0 kubenswrapper[31559]: E0216 02:39:51.077081 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" containerName="dnsmasq-dns"
Feb 16 02:39:51.077227 master-0 kubenswrapper[31559]: I0216 02:39:51.077088 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" containerName="dnsmasq-dns"
Feb 16 02:39:51.077227 master-0 kubenswrapper[31559]: E0216 02:39:51.077109 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656fd1dc-bb87-40c8-a161-31a194c23629" containerName="ironic-db-sync"
Feb 16 02:39:51.077227 master-0 kubenswrapper[31559]: I0216 02:39:51.077116 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="656fd1dc-bb87-40c8-a161-31a194c23629" containerName="ironic-db-sync"
Feb 16 02:39:51.077227 master-0 kubenswrapper[31559]: E0216 02:39:51.077126 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" containerName="init"
Feb 16 02:39:51.077227 master-0 kubenswrapper[31559]: I0216 02:39:51.077132 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" containerName="init"
Feb 16 02:39:51.087760 master-0 kubenswrapper[31559]: E0216 02:39:51.087691 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerName="cinder-volume"
Feb 16 02:39:51.087760 master-0 kubenswrapper[31559]: I0216 02:39:51.087750 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerName="cinder-volume"
Feb 16 02:39:51.088996 master-0 kubenswrapper[31559]: I0216 02:39:51.088959 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerName="cinder-volume"
Feb 16 02:39:51.088996 master-0 kubenswrapper[31559]: I0216 02:39:51.088988 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="656fd1dc-bb87-40c8-a161-31a194c23629" containerName="ironic-db-sync"
Feb 16 02:39:51.089076 master-0 kubenswrapper[31559]: I0216 02:39:51.089030 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" containerName="probe"
Feb 16 02:39:51.089076 master-0 kubenswrapper[31559]: I0216 02:39:51.089056 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" containerName="dnsmasq-dns"
Feb 16 02:39:51.098224 master-0 kubenswrapper[31559]: I0216 02:39:51.097540 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.103022 master-0 kubenswrapper[31559]: I0216 02:39:51.102945 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"]
Feb 16 02:39:51.115417 master-0 kubenswrapper[31559]: I0216 02:39:51.115364 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-volume-lvm-iscsi-config-data"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298148 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-iscsi\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298214 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-dev\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298241 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-lib-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298268 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-combined-ca-bundle\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298290 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-locks-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298348 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-scripts\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298382 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-run\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298404 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-config-data-custom\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298427 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-machine-id\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298467 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-nvme\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298487 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-lib-modules\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298509 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-sys\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298544 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-locks-brick\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298563 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-config-data\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.298866 master-0 kubenswrapper[31559]: I0216 02:39:51.298592 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn98b\" (UniqueName: \"kubernetes.io/projected/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-kube-api-access-cn98b\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.330878 master-0 kubenswrapper[31559]: I0216 02:39:51.329612 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-ldt4r"]
Feb 16 02:39:51.345979 master-0 kubenswrapper[31559]: I0216 02:39:51.345894 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-01eb-account-create-update-cvszz"]
Feb 16 02:39:51.347827 master-0 kubenswrapper[31559]: I0216 02:39:51.347513 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz"
Feb 16 02:39:51.352598 master-0 kubenswrapper[31559]: I0216 02:39:51.352537 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-ldt4r"
Feb 16 02:39:51.376537 master-0 kubenswrapper[31559]: I0216 02:39:51.372931 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-ldt4r"]
Feb 16 02:39:51.382421 master-0 kubenswrapper[31559]: I0216 02:39:51.380922 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret"
Feb 16 02:39:51.399760 master-0 kubenswrapper[31559]: I0216 02:39:51.399665 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-01eb-account-create-update-cvszz"]
Feb 16 02:39:51.400041 master-0 kubenswrapper[31559]: I0216 02:39:51.400008 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-scripts\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.400089 master-0 kubenswrapper[31559]: I0216 02:39:51.400061 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-run\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.400089 master-0 kubenswrapper[31559]: I0216 02:39:51.400083 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502da4cf-fead-4624-891d-f8db5815915f-operator-scripts\") pod \"ironic-inspector-db-create-ldt4r\" (UID: \"502da4cf-fead-4624-891d-f8db5815915f\") " pod="openstack/ironic-inspector-db-create-ldt4r"
Feb 16 02:39:51.400217 master-0 kubenswrapper[31559]: I0216 02:39:51.400181 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-run\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.400935 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-config-data-custom\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.400967 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-machine-id\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.400992 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-nvme\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401014 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-lib-modules\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401033 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-sys\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401059 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df781460-393f-441c-85a9-ab19366c8734-operator-scripts\") pod \"ironic-inspector-01eb-account-create-update-cvszz\" (UID: \"df781460-393f-441c-85a9-ab19366c8734\") " pod="openstack/ironic-inspector-01eb-account-create-update-cvszz"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401088 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt7hk\" (UniqueName: \"kubernetes.io/projected/502da4cf-fead-4624-891d-f8db5815915f-kube-api-access-vt7hk\") pod \"ironic-inspector-db-create-ldt4r\" (UID: \"502da4cf-fead-4624-891d-f8db5815915f\") " pod="openstack/ironic-inspector-db-create-ldt4r"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401109 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-locks-brick\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401128 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-config-data\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:39:51.401502
master-0 kubenswrapper[31559]: I0216 02:39:51.401157 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn98b\" (UniqueName: \"kubernetes.io/projected/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-kube-api-access-cn98b\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401196 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-iscsi\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401213 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-dev\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401246 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-lib-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401267 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-combined-ca-bundle\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " 
pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401288 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vxgs\" (UniqueName: \"kubernetes.io/projected/df781460-393f-441c-85a9-ab19366c8734-kube-api-access-4vxgs\") pod \"ironic-inspector-01eb-account-create-update-cvszz\" (UID: \"df781460-393f-441c-85a9-ab19366c8734\") " pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401312 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-locks-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.401502 master-0 kubenswrapper[31559]: I0216 02:39:51.401427 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-locks-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.402111 master-0 kubenswrapper[31559]: I0216 02:39:51.401636 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-lib-modules\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.402111 master-0 kubenswrapper[31559]: I0216 02:39:51.401643 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-locks-brick\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.402111 master-0 kubenswrapper[31559]: I0216 02:39:51.401665 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-machine-id\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.402111 master-0 kubenswrapper[31559]: I0216 02:39:51.401702 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-nvme\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.402111 master-0 kubenswrapper[31559]: I0216 02:39:51.401744 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-dev\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.402341 master-0 kubenswrapper[31559]: I0216 02:39:51.402226 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-etc-iscsi\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.402568 master-0 kubenswrapper[31559]: I0216 02:39:51.402532 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-var-lib-cinder\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.404400 master-0 kubenswrapper[31559]: I0216 02:39:51.404335 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-scripts\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.404731 master-0 kubenswrapper[31559]: I0216 02:39:51.404700 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-combined-ca-bundle\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.404787 master-0 kubenswrapper[31559]: I0216 02:39:51.401731 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-sys\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.406153 master-0 kubenswrapper[31559]: I0216 02:39:51.406117 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-config-data\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.406290 master-0 kubenswrapper[31559]: I0216 02:39:51.406267 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-config-data-custom\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.441261 master-0 kubenswrapper[31559]: I0216 02:39:51.441152 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn98b\" (UniqueName: \"kubernetes.io/projected/7b7822ff-3da2-41c1-92dd-e44a15cc44f1-kube-api-access-cn98b\") pod \"cinder-dde57-volume-lvm-iscsi-0\" (UID: \"7b7822ff-3da2-41c1-92dd-e44a15cc44f1\") " pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.448713 master-0 kubenswrapper[31559]: I0216 02:39:51.448597 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fbd84b845-7w476"] Feb 16 02:39:51.450670 master-0 kubenswrapper[31559]: I0216 02:39:51.450613 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.491298 master-0 kubenswrapper[31559]: I0216 02:39:51.491218 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fbd84b845-7w476"] Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504032 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504098 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjn66\" (UniqueName: \"kubernetes.io/projected/52d1137e-c1db-4205-ae56-dfd8b4c84b39-kube-api-access-tjn66\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " 
pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504128 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-svc\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504160 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502da4cf-fead-4624-891d-f8db5815915f-operator-scripts\") pod \"ironic-inspector-db-create-ldt4r\" (UID: \"502da4cf-fead-4624-891d-f8db5815915f\") " pod="openstack/ironic-inspector-db-create-ldt4r" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504185 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504219 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-swift-storage-0\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504246 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df781460-393f-441c-85a9-ab19366c8734-operator-scripts\") 
pod \"ironic-inspector-01eb-account-create-update-cvszz\" (UID: \"df781460-393f-441c-85a9-ab19366c8734\") " pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504276 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt7hk\" (UniqueName: \"kubernetes.io/projected/502da4cf-fead-4624-891d-f8db5815915f-kube-api-access-vt7hk\") pod \"ironic-inspector-db-create-ldt4r\" (UID: \"502da4cf-fead-4624-891d-f8db5815915f\") " pod="openstack/ironic-inspector-db-create-ldt4r" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504329 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-config\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.504359 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vxgs\" (UniqueName: \"kubernetes.io/projected/df781460-393f-441c-85a9-ab19366c8734-kube-api-access-4vxgs\") pod \"ironic-inspector-01eb-account-create-update-cvszz\" (UID: \"df781460-393f-441c-85a9-ab19366c8734\") " pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" Feb 16 02:39:51.510478 master-0 kubenswrapper[31559]: I0216 02:39:51.505527 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502da4cf-fead-4624-891d-f8db5815915f-operator-scripts\") pod \"ironic-inspector-db-create-ldt4r\" (UID: \"502da4cf-fead-4624-891d-f8db5815915f\") " pod="openstack/ironic-inspector-db-create-ldt4r" Feb 16 02:39:51.514931 master-0 kubenswrapper[31559]: I0216 02:39:51.514759 31559 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ironic-neutron-agent-57d76bb68d-7wt45"] Feb 16 02:39:51.516273 master-0 kubenswrapper[31559]: I0216 02:39:51.516217 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.517182 master-0 kubenswrapper[31559]: I0216 02:39:51.517125 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df781460-393f-441c-85a9-ab19366c8734-operator-scripts\") pod \"ironic-inspector-01eb-account-create-update-cvszz\" (UID: \"df781460-393f-441c-85a9-ab19366c8734\") " pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" Feb 16 02:39:51.521097 master-0 kubenswrapper[31559]: I0216 02:39:51.521038 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Feb 16 02:39:51.521627 master-0 kubenswrapper[31559]: I0216 02:39:51.521547 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt7hk\" (UniqueName: \"kubernetes.io/projected/502da4cf-fead-4624-891d-f8db5815915f-kube-api-access-vt7hk\") pod \"ironic-inspector-db-create-ldt4r\" (UID: \"502da4cf-fead-4624-891d-f8db5815915f\") " pod="openstack/ironic-inspector-db-create-ldt4r" Feb 16 02:39:51.527568 master-0 kubenswrapper[31559]: I0216 02:39:51.527507 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vxgs\" (UniqueName: \"kubernetes.io/projected/df781460-393f-441c-85a9-ab19366c8734-kube-api-access-4vxgs\") pod \"ironic-inspector-01eb-account-create-update-cvszz\" (UID: \"df781460-393f-441c-85a9-ab19366c8734\") " pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" Feb 16 02:39:51.538324 master-0 kubenswrapper[31559]: I0216 02:39:51.538268 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-57d76bb68d-7wt45"] Feb 16 02:39:51.576795 master-0 
kubenswrapper[31559]: I0216 02:39:51.576744 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:51.608040 master-0 kubenswrapper[31559]: I0216 02:39:51.607957 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjn66\" (UniqueName: \"kubernetes.io/projected/52d1137e-c1db-4205-ae56-dfd8b4c84b39-kube-api-access-tjn66\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.608228 master-0 kubenswrapper[31559]: I0216 02:39:51.608064 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec163f18-db19-4327-ae8b-6feb4c6004af-combined-ca-bundle\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.608228 master-0 kubenswrapper[31559]: I0216 02:39:51.608099 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-svc\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.608228 master-0 kubenswrapper[31559]: I0216 02:39:51.608180 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.608427 master-0 kubenswrapper[31559]: I0216 02:39:51.608388 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-frhcj\" (UniqueName: \"kubernetes.io/projected/ec163f18-db19-4327-ae8b-6feb4c6004af-kube-api-access-frhcj\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.609370 master-0 kubenswrapper[31559]: I0216 02:39:51.609331 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.609415 master-0 kubenswrapper[31559]: I0216 02:39:51.609372 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-svc\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.612254 master-0 kubenswrapper[31559]: I0216 02:39:51.610413 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-swift-storage-0\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.612350 master-0 kubenswrapper[31559]: I0216 02:39:51.612224 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-swift-storage-0\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.619785 master-0 kubenswrapper[31559]: I0216 02:39:51.612775 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec163f18-db19-4327-ae8b-6feb4c6004af-config\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.619983 master-0 kubenswrapper[31559]: I0216 02:39:51.619887 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-config\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.620859 master-0 kubenswrapper[31559]: I0216 02:39:51.620838 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-config\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.626021 master-0 kubenswrapper[31559]: I0216 02:39:51.625960 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.627147 master-0 kubenswrapper[31559]: I0216 02:39:51.627090 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.638660 master-0 kubenswrapper[31559]: I0216 02:39:51.638607 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tjn66\" (UniqueName: \"kubernetes.io/projected/52d1137e-c1db-4205-ae56-dfd8b4c84b39-kube-api-access-tjn66\") pod \"dnsmasq-dns-6fbd84b845-7w476\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") " pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.644619 master-0 kubenswrapper[31559]: I0216 02:39:51.644390 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-86f55b5cf6-zxgrr"] Feb 16 02:39:51.647202 master-0 kubenswrapper[31559]: I0216 02:39:51.647129 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.654530 master-0 kubenswrapper[31559]: I0216 02:39:51.652957 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Feb 16 02:39:51.654530 master-0 kubenswrapper[31559]: I0216 02:39:51.653274 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 16 02:39:51.654530 master-0 kubenswrapper[31559]: I0216 02:39:51.654409 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Feb 16 02:39:51.654758 master-0 kubenswrapper[31559]: I0216 02:39:51.654601 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 02:39:51.654758 master-0 kubenswrapper[31559]: I0216 02:39:51.654747 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Feb 16 02:39:51.668856 master-0 kubenswrapper[31559]: I0216 02:39:51.668772 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-86f55b5cf6-zxgrr"] Feb 16 02:39:51.699712 master-0 kubenswrapper[31559]: I0216 02:39:51.699658 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" Feb 16 02:39:51.728990 master-0 kubenswrapper[31559]: I0216 02:39:51.728895 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec163f18-db19-4327-ae8b-6feb4c6004af-config\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.728972 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-custom\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729080 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-scripts\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729114 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdnt\" (UniqueName: \"kubernetes.io/projected/bbb13644-ecbf-43f1-9203-c714b5485f17-kube-api-access-xtdnt\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729137 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-merged\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729199 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/bbb13644-ecbf-43f1-9203-c714b5485f17-etc-podinfo\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729249 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec163f18-db19-4327-ae8b-6feb4c6004af-combined-ca-bundle\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729272 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729302 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-combined-ca-bundle\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729361 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-frhcj\" (UniqueName: \"kubernetes.io/projected/ec163f18-db19-4327-ae8b-6feb4c6004af-kube-api-access-frhcj\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.729515 master-0 kubenswrapper[31559]: I0216 02:39:51.729392 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-logs\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.735391 master-0 kubenswrapper[31559]: I0216 02:39:51.735354 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec163f18-db19-4327-ae8b-6feb4c6004af-combined-ca-bundle\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.737202 master-0 kubenswrapper[31559]: I0216 02:39:51.737118 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec163f18-db19-4327-ae8b-6feb4c6004af-config\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.803408 master-0 kubenswrapper[31559]: I0216 02:39:51.801983 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-ldt4r" Feb 16 02:39:51.818515 master-0 kubenswrapper[31559]: I0216 02:39:51.813114 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frhcj\" (UniqueName: \"kubernetes.io/projected/ec163f18-db19-4327-ae8b-6feb4c6004af-kube-api-access-frhcj\") pod \"ironic-neutron-agent-57d76bb68d-7wt45\" (UID: \"ec163f18-db19-4327-ae8b-6feb4c6004af\") " pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.818515 master-0 kubenswrapper[31559]: I0216 02:39:51.818257 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.830884 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-scripts\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.830950 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdnt\" (UniqueName: \"kubernetes.io/projected/bbb13644-ecbf-43f1-9203-c714b5485f17-kube-api-access-xtdnt\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.830981 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-merged\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.831046 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/bbb13644-ecbf-43f1-9203-c714b5485f17-etc-podinfo\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.831102 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.831126 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-combined-ca-bundle\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.831193 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-logs\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.831292 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-custom\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.834017 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-merged\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.835730 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-logs\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.850471 master-0 kubenswrapper[31559]: I0216 02:39:51.848179 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-custom\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.871001 master-0 kubenswrapper[31559]: I0216 02:39:51.860039 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/bbb13644-ecbf-43f1-9203-c714b5485f17-etc-podinfo\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.884778 master-0 kubenswrapper[31559]: I0216 02:39:51.878801 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.893777 master-0 kubenswrapper[31559]: I0216 02:39:51.892977 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:51.900860 master-0 kubenswrapper[31559]: I0216 02:39:51.900040 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-scripts\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.913350 master-0 kubenswrapper[31559]: I0216 02:39:51.911969 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdnt\" (UniqueName: \"kubernetes.io/projected/bbb13644-ecbf-43f1-9203-c714b5485f17-kube-api-access-xtdnt\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.913350 master-0 kubenswrapper[31559]: I0216 02:39:51.913162 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-combined-ca-bundle\") pod \"ironic-86f55b5cf6-zxgrr\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:51.975066 master-0 kubenswrapper[31559]: I0216 02:39:51.974941 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6218a7-c0ec-46b5-a1df-cdccc54d540c" path="/var/lib/kubelet/pods/4b6218a7-c0ec-46b5-a1df-cdccc54d540c/volumes" Feb 16 02:39:51.977591 master-0 kubenswrapper[31559]: I0216 02:39:51.975695 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e15f1a9c-41e2-4f45-9a40-94557f09f863" path="/var/lib/kubelet/pods/e15f1a9c-41e2-4f45-9a40-94557f09f863/volumes" Feb 16 02:39:52.005621 master-0 kubenswrapper[31559]: I0216 02:39:52.004869 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:39:52.234744 master-0 kubenswrapper[31559]: I0216 02:39:52.234600 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-volume-lvm-iscsi-0"] Feb 16 02:39:52.717390 master-0 kubenswrapper[31559]: I0216 02:39:52.717290 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-01eb-account-create-update-cvszz"] Feb 16 02:39:52.719005 master-0 kubenswrapper[31559]: W0216 02:39:52.718976 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf781460_393f_441c_85a9_ab19366c8734.slice/crio-c00624880e36bfe93d505c7010d47df680cb124c827e808de9e77a30d09d2b9a WatchSource:0}: Error finding container c00624880e36bfe93d505c7010d47df680cb124c827e808de9e77a30d09d2b9a: Status 404 returned error can't find the container with id c00624880e36bfe93d505c7010d47df680cb124c827e808de9e77a30d09d2b9a Feb 16 02:39:53.018521 master-0 kubenswrapper[31559]: I0216 02:39:53.018301 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" event={"ID":"df781460-393f-441c-85a9-ab19366c8734","Type":"ContainerStarted","Data":"12cb9765b7f8b2c337e54bfccb3a9365cee56c9c6ee25a776d426bba3814d702"} Feb 16 02:39:53.018521 master-0 kubenswrapper[31559]: I0216 02:39:53.018375 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" event={"ID":"df781460-393f-441c-85a9-ab19366c8734","Type":"ContainerStarted","Data":"c00624880e36bfe93d505c7010d47df680cb124c827e808de9e77a30d09d2b9a"} Feb 16 02:39:53.031455 master-0 kubenswrapper[31559]: I0216 02:39:53.027781 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" 
event={"ID":"7b7822ff-3da2-41c1-92dd-e44a15cc44f1","Type":"ContainerStarted","Data":"0c198e0238aa003d3a6782e1e197cbea6849dc53d14eb1a29fa6dadab2f96818"} Feb 16 02:39:53.031455 master-0 kubenswrapper[31559]: I0216 02:39:53.027834 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" event={"ID":"7b7822ff-3da2-41c1-92dd-e44a15cc44f1","Type":"ContainerStarted","Data":"0ff675f3f338f469e24d642c33f7e0725c9d93578fc6a0baeb3a14c4ebb969b8"} Feb 16 02:39:53.031455 master-0 kubenswrapper[31559]: I0216 02:39:53.027846 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" event={"ID":"7b7822ff-3da2-41c1-92dd-e44a15cc44f1","Type":"ContainerStarted","Data":"4b5ede5294bfd18b57f03e9fac8e32608de1d721f6e9c12d8d434a1add6a8f0e"} Feb 16 02:39:53.055458 master-0 kubenswrapper[31559]: I0216 02:39:53.052838 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" podStartSLOduration=2.052819036 podStartE2EDuration="2.052819036s" podCreationTimestamp="2026-02-16 02:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:53.045368607 +0000 UTC m=+1045.389974622" watchObservedRunningTime="2026-02-16 02:39:53.052819036 +0000 UTC m=+1045.397425051" Feb 16 02:39:53.076605 master-0 kubenswrapper[31559]: W0216 02:39:53.076414 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod502da4cf_fead_4624_891d_f8db5815915f.slice/crio-0ce86daf091ec67418d0159644715c62c9007b428afc25d15679be1cbec53a28 WatchSource:0}: Error finding container 0ce86daf091ec67418d0159644715c62c9007b428afc25d15679be1cbec53a28: Status 404 returned error can't find the container with id 0ce86daf091ec67418d0159644715c62c9007b428afc25d15679be1cbec53a28 Feb 16 02:39:53.080570 
master-0 kubenswrapper[31559]: I0216 02:39:53.078671 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-ldt4r"] Feb 16 02:39:53.117684 master-0 kubenswrapper[31559]: I0216 02:39:53.117602 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fbd84b845-7w476"] Feb 16 02:39:53.176459 master-0 kubenswrapper[31559]: I0216 02:39:53.168968 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-57d76bb68d-7wt45"] Feb 16 02:39:53.196970 master-0 kubenswrapper[31559]: I0216 02:39:53.181034 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-86f55b5cf6-zxgrr"] Feb 16 02:39:53.208471 master-0 kubenswrapper[31559]: I0216 02:39:53.206902 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" podStartSLOduration=3.206886632 podStartE2EDuration="3.206886632s" podCreationTimestamp="2026-02-16 02:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:53.094658167 +0000 UTC m=+1045.439264182" watchObservedRunningTime="2026-02-16 02:39:53.206886632 +0000 UTC m=+1045.551492647" Feb 16 02:39:53.488463 master-0 kubenswrapper[31559]: I0216 02:39:53.487982 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 16 02:39:53.496944 master-0 kubenswrapper[31559]: I0216 02:39:53.496872 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Feb 16 02:39:53.500595 master-0 kubenswrapper[31559]: I0216 02:39:53.500419 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 16 02:39:53.501069 master-0 kubenswrapper[31559]: I0216 02:39:53.500951 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 16 02:39:53.534480 master-0 kubenswrapper[31559]: I0216 02:39:53.534427 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 16 02:39:53.619519 master-0 kubenswrapper[31559]: I0216 02:39:53.619260 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-scripts\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.619519 master-0 kubenswrapper[31559]: I0216 02:39:53.619388 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.620488 master-0 kubenswrapper[31559]: I0216 02:39:53.619797 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.620488 master-0 kubenswrapper[31559]: I0216 02:39:53.619849 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/f71d864b-c882-43b8-a7a7-b1e163d38aa4-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.620488 master-0 kubenswrapper[31559]: I0216 02:39:53.619873 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pws7d\" (UniqueName: \"kubernetes.io/projected/f71d864b-c882-43b8-a7a7-b1e163d38aa4-kube-api-access-pws7d\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.620488 master-0 kubenswrapper[31559]: I0216 02:39:53.619904 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.620488 master-0 kubenswrapper[31559]: I0216 02:39:53.619924 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.620488 master-0 kubenswrapper[31559]: I0216 02:39:53.620039 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5c08366a-1518-456d-af2c-993989af15a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9b5cfba0-3007-4020-acdd-7ab5f787ae62\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.722933 master-0 kubenswrapper[31559]: I0216 02:39:53.721981 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-5c08366a-1518-456d-af2c-993989af15a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9b5cfba0-3007-4020-acdd-7ab5f787ae62\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.722933 master-0 kubenswrapper[31559]: I0216 02:39:53.722055 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-scripts\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.722933 master-0 kubenswrapper[31559]: I0216 02:39:53.722122 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.722933 master-0 kubenswrapper[31559]: I0216 02:39:53.722172 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.722933 master-0 kubenswrapper[31559]: I0216 02:39:53.722211 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f71d864b-c882-43b8-a7a7-b1e163d38aa4-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.722933 master-0 kubenswrapper[31559]: I0216 02:39:53.722229 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pws7d\" (UniqueName: 
\"kubernetes.io/projected/f71d864b-c882-43b8-a7a7-b1e163d38aa4-kube-api-access-pws7d\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.722933 master-0 kubenswrapper[31559]: I0216 02:39:53.722255 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.722933 master-0 kubenswrapper[31559]: I0216 02:39:53.722271 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.726278 master-0 kubenswrapper[31559]: I0216 02:39:53.725667 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.726278 master-0 kubenswrapper[31559]: I0216 02:39:53.726076 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.728465 master-0 kubenswrapper[31559]: I0216 02:39:53.728177 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 02:39:53.728465 master-0 kubenswrapper[31559]: I0216 02:39:53.728227 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5c08366a-1518-456d-af2c-993989af15a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9b5cfba0-3007-4020-acdd-7ab5f787ae62\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1916e2190c0985341b1edbefefe0e48654e8ad7425738beb264b577a1dbe077e/globalmount\"" pod="openstack/ironic-conductor-0" Feb 16 02:39:53.729205 master-0 kubenswrapper[31559]: I0216 02:39:53.729158 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.730536 master-0 kubenswrapper[31559]: I0216 02:39:53.730135 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f71d864b-c882-43b8-a7a7-b1e163d38aa4-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.731614 master-0 kubenswrapper[31559]: I0216 02:39:53.731482 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-scripts\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.741553 master-0 kubenswrapper[31559]: I0216 02:39:53.741485 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71d864b-c882-43b8-a7a7-b1e163d38aa4-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " 
pod="openstack/ironic-conductor-0" Feb 16 02:39:53.757759 master-0 kubenswrapper[31559]: I0216 02:39:53.757697 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pws7d\" (UniqueName: \"kubernetes.io/projected/f71d864b-c882-43b8-a7a7-b1e163d38aa4-kube-api-access-pws7d\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:53.985062 master-0 kubenswrapper[31559]: I0216 02:39:53.985017 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.036292 master-0 kubenswrapper[31559]: I0216 02:39:54.036227 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-combined-ca-bundle\") pod \"630fbd6e-863a-4347-acc7-38ae08b97e61\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " Feb 16 02:39:54.036846 master-0 kubenswrapper[31559]: I0216 02:39:54.036384 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data-custom\") pod \"630fbd6e-863a-4347-acc7-38ae08b97e61\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " Feb 16 02:39:54.036846 master-0 kubenswrapper[31559]: I0216 02:39:54.036426 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrjb\" (UniqueName: \"kubernetes.io/projected/630fbd6e-863a-4347-acc7-38ae08b97e61-kube-api-access-snrjb\") pod \"630fbd6e-863a-4347-acc7-38ae08b97e61\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " Feb 16 02:39:54.036846 master-0 kubenswrapper[31559]: I0216 02:39:54.036554 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-scripts\") pod \"630fbd6e-863a-4347-acc7-38ae08b97e61\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " Feb 16 02:39:54.036846 master-0 kubenswrapper[31559]: I0216 02:39:54.036731 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data\") pod \"630fbd6e-863a-4347-acc7-38ae08b97e61\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " Feb 16 02:39:54.036846 master-0 kubenswrapper[31559]: I0216 02:39:54.036808 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/630fbd6e-863a-4347-acc7-38ae08b97e61-etc-machine-id\") pod \"630fbd6e-863a-4347-acc7-38ae08b97e61\" (UID: \"630fbd6e-863a-4347-acc7-38ae08b97e61\") " Feb 16 02:39:54.042229 master-0 kubenswrapper[31559]: I0216 02:39:54.041218 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630fbd6e-863a-4347-acc7-38ae08b97e61-kube-api-access-snrjb" (OuterVolumeSpecName: "kube-api-access-snrjb") pod "630fbd6e-863a-4347-acc7-38ae08b97e61" (UID: "630fbd6e-863a-4347-acc7-38ae08b97e61"). InnerVolumeSpecName "kube-api-access-snrjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:54.042229 master-0 kubenswrapper[31559]: I0216 02:39:54.041314 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/630fbd6e-863a-4347-acc7-38ae08b97e61-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "630fbd6e-863a-4347-acc7-38ae08b97e61" (UID: "630fbd6e-863a-4347-acc7-38ae08b97e61"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.053702 master-0 kubenswrapper[31559]: I0216 02:39:54.053636 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerStarted","Data":"170a1d2795445711807f791c40d0b8140f0f02e67dec9199cbffaded413cd093"} Feb 16 02:39:54.055396 master-0 kubenswrapper[31559]: I0216 02:39:54.055361 31559 generic.go:334] "Generic (PLEG): container finished" podID="69ebefce-f456-4d6e-9111-5df13870bbae" containerID="dcd5e87d2ef35ada86ce3ee55e19a736ff52a215d064ff20a20133fe5a979579" exitCode=0 Feb 16 02:39:54.055482 master-0 kubenswrapper[31559]: I0216 02:39:54.055408 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"69ebefce-f456-4d6e-9111-5df13870bbae","Type":"ContainerDied","Data":"dcd5e87d2ef35ada86ce3ee55e19a736ff52a215d064ff20a20133fe5a979579"} Feb 16 02:39:54.056787 master-0 kubenswrapper[31559]: I0216 02:39:54.056748 31559 generic.go:334] "Generic (PLEG): container finished" podID="df781460-393f-441c-85a9-ab19366c8734" containerID="12cb9765b7f8b2c337e54bfccb3a9365cee56c9c6ee25a776d426bba3814d702" exitCode=0 Feb 16 02:39:54.056856 master-0 kubenswrapper[31559]: I0216 02:39:54.056788 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" event={"ID":"df781460-393f-441c-85a9-ab19366c8734","Type":"ContainerDied","Data":"12cb9765b7f8b2c337e54bfccb3a9365cee56c9c6ee25a776d426bba3814d702"} Feb 16 02:39:54.072452 master-0 kubenswrapper[31559]: I0216 02:39:54.072184 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "630fbd6e-863a-4347-acc7-38ae08b97e61" (UID: "630fbd6e-863a-4347-acc7-38ae08b97e61"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:54.073109 master-0 kubenswrapper[31559]: I0216 02:39:54.072891 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-scripts" (OuterVolumeSpecName: "scripts") pod "630fbd6e-863a-4347-acc7-38ae08b97e61" (UID: "630fbd6e-863a-4347-acc7-38ae08b97e61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:54.075862 master-0 kubenswrapper[31559]: I0216 02:39:54.075722 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" event={"ID":"ec163f18-db19-4327-ae8b-6feb4c6004af","Type":"ContainerStarted","Data":"03ece8c22f2b1f470711da471d37685b02266e9c00c9cb8e8aa027f741d4f143"} Feb 16 02:39:54.096519 master-0 kubenswrapper[31559]: I0216 02:39:54.095774 31559 generic.go:334] "Generic (PLEG): container finished" podID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerID="b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a" exitCode=0 Feb 16 02:39:54.096519 master-0 kubenswrapper[31559]: I0216 02:39:54.095817 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"630fbd6e-863a-4347-acc7-38ae08b97e61","Type":"ContainerDied","Data":"b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a"} Feb 16 02:39:54.096519 master-0 kubenswrapper[31559]: I0216 02:39:54.095871 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"630fbd6e-863a-4347-acc7-38ae08b97e61","Type":"ContainerDied","Data":"d6e452c1c69921cc4b18515ace94e43ff1e9ef1cbd236272858710e8a9805e94"} Feb 16 02:39:54.096519 master-0 kubenswrapper[31559]: I0216 02:39:54.095890 31559 scope.go:117] "RemoveContainer" containerID="8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441" Feb 16 02:39:54.096519 master-0 kubenswrapper[31559]: I0216 
02:39:54.095890 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.119910 master-0 kubenswrapper[31559]: I0216 02:39:54.119810 31559 generic.go:334] "Generic (PLEG): container finished" podID="502da4cf-fead-4624-891d-f8db5815915f" containerID="06a6f7b34024716d96e0f364464ae5d56c7102cc42d0797fda5389481b33f7e1" exitCode=0 Feb 16 02:39:54.119910 master-0 kubenswrapper[31559]: I0216 02:39:54.119881 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-ldt4r" event={"ID":"502da4cf-fead-4624-891d-f8db5815915f","Type":"ContainerDied","Data":"06a6f7b34024716d96e0f364464ae5d56c7102cc42d0797fda5389481b33f7e1"} Feb 16 02:39:54.119910 master-0 kubenswrapper[31559]: I0216 02:39:54.119906 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-ldt4r" event={"ID":"502da4cf-fead-4624-891d-f8db5815915f","Type":"ContainerStarted","Data":"0ce86daf091ec67418d0159644715c62c9007b428afc25d15679be1cbec53a28"} Feb 16 02:39:54.126703 master-0 kubenswrapper[31559]: I0216 02:39:54.125339 31559 generic.go:334] "Generic (PLEG): container finished" podID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerID="714454ce59af0f2a0a90940ec7784ddc3a76240fac92fab0f359069d0ff43eb6" exitCode=0 Feb 16 02:39:54.126703 master-0 kubenswrapper[31559]: I0216 02:39:54.125627 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" event={"ID":"52d1137e-c1db-4205-ae56-dfd8b4c84b39","Type":"ContainerDied","Data":"714454ce59af0f2a0a90940ec7784ddc3a76240fac92fab0f359069d0ff43eb6"} Feb 16 02:39:54.126703 master-0 kubenswrapper[31559]: I0216 02:39:54.125673 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" event={"ID":"52d1137e-c1db-4205-ae56-dfd8b4c84b39","Type":"ContainerStarted","Data":"55e165cca09b6cdb7a0a9bab3cec68d6b44d40ca9c1a7070b1b11b9b62fc1f55"} 
Feb 16 02:39:54.138533 master-0 kubenswrapper[31559]: I0216 02:39:54.136593 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "630fbd6e-863a-4347-acc7-38ae08b97e61" (UID: "630fbd6e-863a-4347-acc7-38ae08b97e61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:54.139870 master-0 kubenswrapper[31559]: I0216 02:39:54.139838 31559 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/630fbd6e-863a-4347-acc7-38ae08b97e61-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.139870 master-0 kubenswrapper[31559]: I0216 02:39:54.139868 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.140014 master-0 kubenswrapper[31559]: I0216 02:39:54.139877 31559 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.140014 master-0 kubenswrapper[31559]: I0216 02:39:54.139889 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snrjb\" (UniqueName: \"kubernetes.io/projected/630fbd6e-863a-4347-acc7-38ae08b97e61-kube-api-access-snrjb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.140014 master-0 kubenswrapper[31559]: I0216 02:39:54.139898 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.188659 master-0 kubenswrapper[31559]: I0216 02:39:54.188599 31559 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data" (OuterVolumeSpecName: "config-data") pod "630fbd6e-863a-4347-acc7-38ae08b97e61" (UID: "630fbd6e-863a-4347-acc7-38ae08b97e61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:54.245102 master-0 kubenswrapper[31559]: I0216 02:39:54.244843 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630fbd6e-863a-4347-acc7-38ae08b97e61-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.355981 master-0 kubenswrapper[31559]: I0216 02:39:54.355877 31559 scope.go:117] "RemoveContainer" containerID="b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a" Feb 16 02:39:54.357724 master-0 kubenswrapper[31559]: I0216 02:39:54.357692 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:54.393626 master-0 kubenswrapper[31559]: I0216 02:39:54.393581 31559 scope.go:117] "RemoveContainer" containerID="8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441" Feb 16 02:39:54.394227 master-0 kubenswrapper[31559]: E0216 02:39:54.394076 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441\": container with ID starting with 8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441 not found: ID does not exist" containerID="8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441" Feb 16 02:39:54.394296 master-0 kubenswrapper[31559]: I0216 02:39:54.394233 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441"} err="failed to get container status 
\"8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441\": rpc error: code = NotFound desc = could not find container \"8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441\": container with ID starting with 8356e3adfe7ddf46d5601a907435bdc1b4b481136ad40d0bc4f316c1555b0441 not found: ID does not exist" Feb 16 02:39:54.394296 master-0 kubenswrapper[31559]: I0216 02:39:54.394264 31559 scope.go:117] "RemoveContainer" containerID="b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a" Feb 16 02:39:54.394899 master-0 kubenswrapper[31559]: E0216 02:39:54.394851 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a\": container with ID starting with b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a not found: ID does not exist" containerID="b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a" Feb 16 02:39:54.394956 master-0 kubenswrapper[31559]: I0216 02:39:54.394939 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a"} err="failed to get container status \"b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a\": rpc error: code = NotFound desc = could not find container \"b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a\": container with ID starting with b2e65fae808367adaae117112ad3710da6e808ffbd3bf22d50506954995c4e5a not found: ID does not exist" Feb 16 02:39:54.464415 master-0 kubenswrapper[31559]: I0216 02:39:54.464321 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464415 master-0 
kubenswrapper[31559]: I0216 02:39:54.464385 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-iscsi\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464415 master-0 kubenswrapper[31559]: I0216 02:39:54.464461 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-run\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464415 master-0 kubenswrapper[31559]: I0216 02:39:54.464509 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-lib-cinder\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464538 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-brick\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464564 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-machine-id\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464639 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-nvme\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464722 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-cinder\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464748 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvvdb\" (UniqueName: \"kubernetes.io/projected/69ebefce-f456-4d6e-9111-5df13870bbae-kube-api-access-rvvdb\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464764 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-dev\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464798 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-scripts\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464814 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-combined-ca-bundle\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 
16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464879 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-lib-modules\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464899 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data-custom\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.464973 master-0 kubenswrapper[31559]: I0216 02:39:54.464930 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-sys\") pod \"69ebefce-f456-4d6e-9111-5df13870bbae\" (UID: \"69ebefce-f456-4d6e-9111-5df13870bbae\") " Feb 16 02:39:54.471610 master-0 kubenswrapper[31559]: I0216 02:39:54.467535 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.471610 master-0 kubenswrapper[31559]: I0216 02:39:54.467907 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.471610 master-0 kubenswrapper[31559]: I0216 02:39:54.467934 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.471610 master-0 kubenswrapper[31559]: I0216 02:39:54.467956 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-run" (OuterVolumeSpecName: "run") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.471610 master-0 kubenswrapper[31559]: I0216 02:39:54.467980 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.471610 master-0 kubenswrapper[31559]: I0216 02:39:54.468006 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.472598 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-dev" (OuterVolumeSpecName: "dev") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.472670 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-sys" (OuterVolumeSpecName: "sys") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.472717 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.472772 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.473386 31559 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.473482 31559 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-run\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.473505 31559 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.473524 31559 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.473576 31559 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.473593 31559 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.474101 master-0 kubenswrapper[31559]: I0216 02:39:54.473609 31559 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-dev\") on node \"master-0\" DevicePath 
\"\"" Feb 16 02:39:54.483514 master-0 kubenswrapper[31559]: I0216 02:39:54.478938 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-scripts" (OuterVolumeSpecName: "scripts") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:54.483514 master-0 kubenswrapper[31559]: I0216 02:39:54.479384 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ebefce-f456-4d6e-9111-5df13870bbae-kube-api-access-rvvdb" (OuterVolumeSpecName: "kube-api-access-rvvdb") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "kube-api-access-rvvdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:54.483514 master-0 kubenswrapper[31559]: I0216 02:39:54.482893 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:54.484469 master-0 kubenswrapper[31559]: I0216 02:39:54.484378 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-scheduler-0"] Feb 16 02:39:54.525261 master-0 kubenswrapper[31559]: I0216 02:39:54.525130 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dde57-scheduler-0"] Feb 16 02:39:54.545259 master-0 kubenswrapper[31559]: I0216 02:39:54.544001 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dde57-scheduler-0"] Feb 16 02:39:54.545259 master-0 kubenswrapper[31559]: E0216 02:39:54.544528 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ebefce-f456-4d6e-9111-5df13870bbae" containerName="cinder-backup" Feb 16 02:39:54.545259 master-0 kubenswrapper[31559]: I0216 02:39:54.544542 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ebefce-f456-4d6e-9111-5df13870bbae" containerName="cinder-backup" Feb 16 02:39:54.546139 master-0 kubenswrapper[31559]: E0216 02:39:54.545348 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerName="cinder-scheduler" Feb 16 02:39:54.546139 master-0 kubenswrapper[31559]: I0216 02:39:54.545361 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerName="cinder-scheduler" Feb 16 02:39:54.546139 master-0 kubenswrapper[31559]: E0216 02:39:54.545391 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerName="probe" Feb 16 02:39:54.546139 master-0 kubenswrapper[31559]: I0216 02:39:54.545537 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerName="probe" Feb 16 02:39:54.546139 master-0 kubenswrapper[31559]: E0216 02:39:54.545617 31559 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="69ebefce-f456-4d6e-9111-5df13870bbae" containerName="probe" Feb 16 02:39:54.546139 master-0 kubenswrapper[31559]: I0216 02:39:54.545627 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ebefce-f456-4d6e-9111-5df13870bbae" containerName="probe" Feb 16 02:39:54.547670 master-0 kubenswrapper[31559]: I0216 02:39:54.546682 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ebefce-f456-4d6e-9111-5df13870bbae" containerName="cinder-backup" Feb 16 02:39:54.547670 master-0 kubenswrapper[31559]: I0216 02:39:54.546710 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerName="cinder-scheduler" Feb 16 02:39:54.547670 master-0 kubenswrapper[31559]: I0216 02:39:54.546746 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ebefce-f456-4d6e-9111-5df13870bbae" containerName="probe" Feb 16 02:39:54.547670 master-0 kubenswrapper[31559]: I0216 02:39:54.546761 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" containerName="probe" Feb 16 02:39:54.549520 master-0 kubenswrapper[31559]: I0216 02:39:54.548587 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.556302 master-0 kubenswrapper[31559]: I0216 02:39:54.556270 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-scheduler-config-data" Feb 16 02:39:54.561637 master-0 kubenswrapper[31559]: I0216 02:39:54.559699 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-scheduler-0"] Feb 16 02:39:54.576855 master-0 kubenswrapper[31559]: I0216 02:39:54.576795 31559 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.576855 master-0 kubenswrapper[31559]: I0216 02:39:54.576839 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvvdb\" (UniqueName: \"kubernetes.io/projected/69ebefce-f456-4d6e-9111-5df13870bbae-kube-api-access-rvvdb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.576855 master-0 kubenswrapper[31559]: I0216 02:39:54.576850 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.576855 master-0 kubenswrapper[31559]: I0216 02:39:54.576859 31559 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.576855 master-0 kubenswrapper[31559]: I0216 02:39:54.576869 31559 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.577324 master-0 kubenswrapper[31559]: I0216 02:39:54.576881 31559 reconciler_common.go:293] "Volume detached for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/69ebefce-f456-4d6e-9111-5df13870bbae-sys\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.629954 master-0 kubenswrapper[31559]: I0216 02:39:54.629896 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:54.659094 master-0 kubenswrapper[31559]: I0216 02:39:54.658966 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data" (OuterVolumeSpecName: "config-data") pod "69ebefce-f456-4d6e-9111-5df13870bbae" (UID: "69ebefce-f456-4d6e-9111-5df13870bbae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:39:54.668937 master-0 kubenswrapper[31559]: I0216 02:39:54.668873 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-c9d68ffd6-4ln7q"] Feb 16 02:39:54.676857 master-0 kubenswrapper[31559]: I0216 02:39:54.676003 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.678263 master-0 kubenswrapper[31559]: I0216 02:39:54.678237 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-combined-ca-bundle\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.678350 master-0 kubenswrapper[31559]: I0216 02:39:54.678317 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-config-data-custom\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.678350 master-0 kubenswrapper[31559]: I0216 02:39:54.678338 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z7dq\" (UniqueName: \"kubernetes.io/projected/c72a1046-0b7c-4706-b4a0-1d87b0e50900-kube-api-access-5z7dq\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.678419 master-0 kubenswrapper[31559]: I0216 02:39:54.678379 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-scripts\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.678419 master-0 kubenswrapper[31559]: I0216 02:39:54.678409 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-config-data\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.678506 master-0 kubenswrapper[31559]: I0216 02:39:54.678453 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c72a1046-0b7c-4706-b4a0-1d87b0e50900-etc-machine-id\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.678566 master-0 kubenswrapper[31559]: I0216 02:39:54.678548 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.678566 master-0 kubenswrapper[31559]: I0216 02:39:54.678565 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ebefce-f456-4d6e-9111-5df13870bbae-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:54.681410 master-0 kubenswrapper[31559]: I0216 02:39:54.681361 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c9d68ffd6-4ln7q"] Feb 16 02:39:54.698799 master-0 kubenswrapper[31559]: I0216 02:39:54.698739 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Feb 16 02:39:54.701248 master-0 kubenswrapper[31559]: I0216 02:39:54.701000 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Feb 16 02:39:54.780496 master-0 kubenswrapper[31559]: I0216 02:39:54.780415 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/778356f6-5ce1-4462-8bf6-8c63775e7fe0-logs\") pod 
\"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.780864 master-0 kubenswrapper[31559]: I0216 02:39:54.780792 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-internal-tls-certs\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.780999 master-0 kubenswrapper[31559]: I0216 02:39:54.780985 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-combined-ca-bundle\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.781156 master-0 kubenswrapper[31559]: I0216 02:39:54.781124 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-config-data-custom\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.781267 master-0 kubenswrapper[31559]: I0216 02:39:54.781253 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z7dq\" (UniqueName: \"kubernetes.io/projected/c72a1046-0b7c-4706-b4a0-1d87b0e50900-kube-api-access-5z7dq\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.781403 master-0 kubenswrapper[31559]: I0216 02:39:54.781386 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-scripts\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.781550 master-0 kubenswrapper[31559]: I0216 02:39:54.781534 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-scripts\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.781654 master-0 kubenswrapper[31559]: I0216 02:39:54.781640 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data-custom\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.781790 master-0 kubenswrapper[31559]: I0216 02:39:54.781777 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-config-data\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.781943 master-0 kubenswrapper[31559]: I0216 02:39:54.781909 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c72a1046-0b7c-4706-b4a0-1d87b0e50900-etc-machine-id\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.782100 master-0 kubenswrapper[31559]: I0216 02:39:54.782035 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c72a1046-0b7c-4706-b4a0-1d87b0e50900-etc-machine-id\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.782152 master-0 kubenswrapper[31559]: I0216 02:39:54.782055 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/778356f6-5ce1-4462-8bf6-8c63775e7fe0-etc-podinfo\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.782214 master-0 kubenswrapper[31559]: I0216 02:39:54.782169 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-public-tls-certs\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.782722 master-0 kubenswrapper[31559]: I0216 02:39:54.782693 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-combined-ca-bundle\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.782782 master-0 kubenswrapper[31559]: I0216 02:39:54.782769 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data-merged\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.782816 master-0 kubenswrapper[31559]: I0216 02:39:54.782801 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.782847 master-0 kubenswrapper[31559]: I0216 02:39:54.782829 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgsp9\" (UniqueName: \"kubernetes.io/projected/778356f6-5ce1-4462-8bf6-8c63775e7fe0-kube-api-access-jgsp9\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.785828 master-0 kubenswrapper[31559]: I0216 02:39:54.785784 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-scripts\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.786081 master-0 kubenswrapper[31559]: I0216 02:39:54.786050 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-config-data-custom\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.787484 master-0 kubenswrapper[31559]: I0216 02:39:54.787454 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-config-data\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.788415 master-0 kubenswrapper[31559]: I0216 02:39:54.788381 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c72a1046-0b7c-4706-b4a0-1d87b0e50900-combined-ca-bundle\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.797676 master-0 kubenswrapper[31559]: I0216 02:39:54.797617 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z7dq\" (UniqueName: \"kubernetes.io/projected/c72a1046-0b7c-4706-b4a0-1d87b0e50900-kube-api-access-5z7dq\") pod \"cinder-dde57-scheduler-0\" (UID: \"c72a1046-0b7c-4706-b4a0-1d87b0e50900\") " pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.868310 master-0 kubenswrapper[31559]: I0216 02:39:54.868236 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:39:54.886921 master-0 kubenswrapper[31559]: I0216 02:39:54.886865 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data-custom\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887119 master-0 kubenswrapper[31559]: I0216 02:39:54.886965 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/778356f6-5ce1-4462-8bf6-8c63775e7fe0-etc-podinfo\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887119 master-0 kubenswrapper[31559]: I0216 02:39:54.886999 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-public-tls-certs\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887119 
master-0 kubenswrapper[31559]: I0216 02:39:54.887100 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887266 master-0 kubenswrapper[31559]: I0216 02:39:54.887120 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data-merged\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887266 master-0 kubenswrapper[31559]: I0216 02:39:54.887142 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgsp9\" (UniqueName: \"kubernetes.io/projected/778356f6-5ce1-4462-8bf6-8c63775e7fe0-kube-api-access-jgsp9\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887266 master-0 kubenswrapper[31559]: I0216 02:39:54.887203 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/778356f6-5ce1-4462-8bf6-8c63775e7fe0-logs\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887266 master-0 kubenswrapper[31559]: I0216 02:39:54.887224 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-internal-tls-certs\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887266 master-0 kubenswrapper[31559]: I0216 
02:39:54.887240 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-combined-ca-bundle\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.887417 master-0 kubenswrapper[31559]: I0216 02:39:54.887275 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-scripts\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.888816 master-0 kubenswrapper[31559]: I0216 02:39:54.888576 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/778356f6-5ce1-4462-8bf6-8c63775e7fe0-logs\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.889072 master-0 kubenswrapper[31559]: I0216 02:39:54.889026 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data-merged\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.894151 master-0 kubenswrapper[31559]: I0216 02:39:54.894114 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-public-tls-certs\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.895223 master-0 kubenswrapper[31559]: I0216 02:39:54.895182 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data-custom\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.895290 master-0 kubenswrapper[31559]: I0216 02:39:54.895147 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-scripts\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.895958 master-0 kubenswrapper[31559]: I0216 02:39:54.895903 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-combined-ca-bundle\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.896931 master-0 kubenswrapper[31559]: I0216 02:39:54.896897 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-config-data\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.899217 master-0 kubenswrapper[31559]: I0216 02:39:54.898633 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/778356f6-5ce1-4462-8bf6-8c63775e7fe0-internal-tls-certs\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.902222 master-0 kubenswrapper[31559]: I0216 02:39:54.901868 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/778356f6-5ce1-4462-8bf6-8c63775e7fe0-etc-podinfo\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:54.905342 master-0 kubenswrapper[31559]: I0216 02:39:54.905277 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgsp9\" (UniqueName: \"kubernetes.io/projected/778356f6-5ce1-4462-8bf6-8c63775e7fe0-kube-api-access-jgsp9\") pod \"ironic-c9d68ffd6-4ln7q\" (UID: \"778356f6-5ce1-4462-8bf6-8c63775e7fe0\") " pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:55.010057 master-0 kubenswrapper[31559]: I0216 02:39:55.009762 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:39:55.078278 master-0 kubenswrapper[31559]: I0216 02:39:55.078217 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5c08366a-1518-456d-af2c-993989af15a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9b5cfba0-3007-4020-acdd-7ab5f787ae62\") pod \"ironic-conductor-0\" (UID: \"f71d864b-c882-43b8-a7a7-b1e163d38aa4\") " pod="openstack/ironic-conductor-0" Feb 16 02:39:55.199829 master-0 kubenswrapper[31559]: I0216 02:39:55.199748 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" event={"ID":"52d1137e-c1db-4205-ae56-dfd8b4c84b39","Type":"ContainerStarted","Data":"686e2f92a7e9f283dd0f139230d85d8489dd8986eea4a01c147f865449eba7a6"} Feb 16 02:39:55.199991 master-0 kubenswrapper[31559]: I0216 02:39:55.199878 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:39:55.205158 master-0 kubenswrapper[31559]: I0216 02:39:55.204134 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.205158 master-0 kubenswrapper[31559]: I0216 02:39:55.204255 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"69ebefce-f456-4d6e-9111-5df13870bbae","Type":"ContainerDied","Data":"410e777434938c4c8c5e98be007009fbba5f46b9025599709357b86626aa7b58"} Feb 16 02:39:55.205158 master-0 kubenswrapper[31559]: I0216 02:39:55.204284 31559 scope.go:117] "RemoveContainer" containerID="65ad02d23bfe7fd4c4d077dbd9e92199fe76e1b58dad7f8ca534300c24bb0aa8" Feb 16 02:39:55.227334 master-0 kubenswrapper[31559]: I0216 02:39:55.222602 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" podStartSLOduration=4.222586232 podStartE2EDuration="4.222586232s" podCreationTimestamp="2026-02-16 02:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:55.221165626 +0000 UTC m=+1047.565771651" watchObservedRunningTime="2026-02-16 02:39:55.222586232 +0000 UTC m=+1047.567192247" Feb 16 02:39:55.277864 master-0 kubenswrapper[31559]: I0216 02:39:55.277297 31559 scope.go:117] "RemoveContainer" containerID="dcd5e87d2ef35ada86ce3ee55e19a736ff52a215d064ff20a20133fe5a979579" Feb 16 02:39:55.326821 master-0 kubenswrapper[31559]: I0216 02:39:55.326761 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-backup-0"] Feb 16 02:39:55.334727 master-0 kubenswrapper[31559]: I0216 02:39:55.334692 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dde57-backup-0"] Feb 16 02:39:55.348751 master-0 kubenswrapper[31559]: I0216 02:39:55.348707 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-scheduler-0"] Feb 16 02:39:55.364429 master-0 kubenswrapper[31559]: I0216 02:39:55.364331 31559 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-dde57-backup-0"] Feb 16 02:39:55.374087 master-0 kubenswrapper[31559]: I0216 02:39:55.374037 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 16 02:39:55.379505 master-0 kubenswrapper[31559]: I0216 02:39:55.377340 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.379704 master-0 kubenswrapper[31559]: I0216 02:39:55.379669 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-backup-config-data" Feb 16 02:39:55.384995 master-0 kubenswrapper[31559]: I0216 02:39:55.384942 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-backup-0"] Feb 16 02:39:55.415799 master-0 kubenswrapper[31559]: I0216 02:39:55.415728 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-scripts\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.415799 master-0 kubenswrapper[31559]: I0216 02:39:55.415796 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-sys\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416114 master-0 kubenswrapper[31559]: I0216 02:39:55.415821 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-locks-brick\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416114 master-0 
kubenswrapper[31559]: I0216 02:39:55.415913 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-config-data-custom\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416114 master-0 kubenswrapper[31559]: I0216 02:39:55.416033 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-lib-modules\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416244 master-0 kubenswrapper[31559]: I0216 02:39:55.416149 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-machine-id\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416244 master-0 kubenswrapper[31559]: I0216 02:39:55.416167 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-nvme\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416244 master-0 kubenswrapper[31559]: I0216 02:39:55.416189 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-run\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416244 
master-0 kubenswrapper[31559]: I0216 02:39:55.416226 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-lib-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416418 master-0 kubenswrapper[31559]: I0216 02:39:55.416270 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-locks-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416418 master-0 kubenswrapper[31559]: I0216 02:39:55.416298 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-combined-ca-bundle\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416418 master-0 kubenswrapper[31559]: I0216 02:39:55.416324 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-dev\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416575 master-0 kubenswrapper[31559]: I0216 02:39:55.416423 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj2mc\" (UniqueName: \"kubernetes.io/projected/5502f914-cfa4-4338-b809-c3407e74b10e-kube-api-access-dj2mc\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " 
pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416575 master-0 kubenswrapper[31559]: I0216 02:39:55.416519 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-config-data\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.416575 master-0 kubenswrapper[31559]: I0216 02:39:55.416571 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-iscsi\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525474 master-0 kubenswrapper[31559]: I0216 02:39:55.525398 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-scripts\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525591 master-0 kubenswrapper[31559]: I0216 02:39:55.525526 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-sys\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525591 master-0 kubenswrapper[31559]: I0216 02:39:55.525571 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-locks-brick\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525672 master-0 
kubenswrapper[31559]: I0216 02:39:55.525593 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-config-data-custom\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525672 master-0 kubenswrapper[31559]: I0216 02:39:55.525649 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-lib-modules\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525792 master-0 kubenswrapper[31559]: I0216 02:39:55.525770 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-machine-id\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525840 master-0 kubenswrapper[31559]: I0216 02:39:55.525774 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-locks-brick\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525840 master-0 kubenswrapper[31559]: I0216 02:39:55.525798 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-nvme\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525907 master-0 kubenswrapper[31559]: I0216 02:39:55.525859 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-machine-id\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525907 master-0 kubenswrapper[31559]: I0216 02:39:55.525865 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-run\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.525907 master-0 kubenswrapper[31559]: I0216 02:39:55.525881 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-sys\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.526056 master-0 kubenswrapper[31559]: I0216 02:39:55.525911 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-lib-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.526056 master-0 kubenswrapper[31559]: I0216 02:39:55.525920 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-nvme\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.526752 master-0 kubenswrapper[31559]: I0216 02:39:55.526702 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-lib-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.526819 master-0 kubenswrapper[31559]: I0216 02:39:55.526780 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-lib-modules\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527499 master-0 kubenswrapper[31559]: I0216 02:39:55.527023 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-run\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527499 master-0 kubenswrapper[31559]: I0216 02:39:55.527087 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-locks-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527499 master-0 kubenswrapper[31559]: I0216 02:39:55.527122 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-combined-ca-bundle\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527499 master-0 kubenswrapper[31559]: I0216 02:39:55.527156 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-dev\") pod \"cinder-dde57-backup-0\" 
(UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527499 master-0 kubenswrapper[31559]: I0216 02:39:55.527186 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-var-locks-cinder\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527499 master-0 kubenswrapper[31559]: I0216 02:39:55.527275 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj2mc\" (UniqueName: \"kubernetes.io/projected/5502f914-cfa4-4338-b809-c3407e74b10e-kube-api-access-dj2mc\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527499 master-0 kubenswrapper[31559]: I0216 02:39:55.527353 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-config-data\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527499 master-0 kubenswrapper[31559]: I0216 02:39:55.527413 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-iscsi\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527771 master-0 kubenswrapper[31559]: I0216 02:39:55.527626 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-etc-iscsi\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " 
pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.527845 master-0 kubenswrapper[31559]: I0216 02:39:55.527817 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5502f914-cfa4-4338-b809-c3407e74b10e-dev\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.529028 master-0 kubenswrapper[31559]: I0216 02:39:55.529007 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-scripts\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.531230 master-0 kubenswrapper[31559]: I0216 02:39:55.531193 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-combined-ca-bundle\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.536972 master-0 kubenswrapper[31559]: I0216 02:39:55.533925 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-config-data\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.539033 master-0 kubenswrapper[31559]: I0216 02:39:55.538944 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5502f914-cfa4-4338-b809-c3407e74b10e-config-data-custom\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.542236 master-0 kubenswrapper[31559]: I0216 02:39:55.542190 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj2mc\" (UniqueName: \"kubernetes.io/projected/5502f914-cfa4-4338-b809-c3407e74b10e-kube-api-access-dj2mc\") pod \"cinder-dde57-backup-0\" (UID: \"5502f914-cfa4-4338-b809-c3407e74b10e\") " pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.791507 master-0 kubenswrapper[31559]: I0216 02:39:55.788875 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c9d68ffd6-4ln7q"] Feb 16 02:39:55.824783 master-0 kubenswrapper[31559]: I0216 02:39:55.823862 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-backup-0" Feb 16 02:39:55.940335 master-0 kubenswrapper[31559]: I0216 02:39:55.940277 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="630fbd6e-863a-4347-acc7-38ae08b97e61" path="/var/lib/kubelet/pods/630fbd6e-863a-4347-acc7-38ae08b97e61/volumes" Feb 16 02:39:55.941259 master-0 kubenswrapper[31559]: I0216 02:39:55.941230 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ebefce-f456-4d6e-9111-5df13870bbae" path="/var/lib/kubelet/pods/69ebefce-f456-4d6e-9111-5df13870bbae/volumes" Feb 16 02:39:56.040282 master-0 kubenswrapper[31559]: I0216 02:39:56.040229 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 16 02:39:56.059128 master-0 kubenswrapper[31559]: I0216 02:39:56.059089 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" Feb 16 02:39:56.076371 master-0 kubenswrapper[31559]: I0216 02:39:56.076324 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-ldt4r" Feb 16 02:39:56.159422 master-0 kubenswrapper[31559]: I0216 02:39:56.159326 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df781460-393f-441c-85a9-ab19366c8734-operator-scripts\") pod \"df781460-393f-441c-85a9-ab19366c8734\" (UID: \"df781460-393f-441c-85a9-ab19366c8734\") " Feb 16 02:39:56.161047 master-0 kubenswrapper[31559]: I0216 02:39:56.160039 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df781460-393f-441c-85a9-ab19366c8734-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df781460-393f-441c-85a9-ab19366c8734" (UID: "df781460-393f-441c-85a9-ab19366c8734"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:56.161100 master-0 kubenswrapper[31559]: I0216 02:39:56.161085 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vxgs\" (UniqueName: \"kubernetes.io/projected/df781460-393f-441c-85a9-ab19366c8734-kube-api-access-4vxgs\") pod \"df781460-393f-441c-85a9-ab19366c8734\" (UID: \"df781460-393f-441c-85a9-ab19366c8734\") " Feb 16 02:39:56.161353 master-0 kubenswrapper[31559]: I0216 02:39:56.161332 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt7hk\" (UniqueName: \"kubernetes.io/projected/502da4cf-fead-4624-891d-f8db5815915f-kube-api-access-vt7hk\") pod \"502da4cf-fead-4624-891d-f8db5815915f\" (UID: \"502da4cf-fead-4624-891d-f8db5815915f\") " Feb 16 02:39:56.161519 master-0 kubenswrapper[31559]: I0216 02:39:56.161487 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502da4cf-fead-4624-891d-f8db5815915f-operator-scripts\") pod \"502da4cf-fead-4624-891d-f8db5815915f\" (UID: 
\"502da4cf-fead-4624-891d-f8db5815915f\") " Feb 16 02:39:56.161973 master-0 kubenswrapper[31559]: I0216 02:39:56.161942 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502da4cf-fead-4624-891d-f8db5815915f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "502da4cf-fead-4624-891d-f8db5815915f" (UID: "502da4cf-fead-4624-891d-f8db5815915f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:39:56.162509 master-0 kubenswrapper[31559]: I0216 02:39:56.162366 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502da4cf-fead-4624-891d-f8db5815915f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:56.162509 master-0 kubenswrapper[31559]: I0216 02:39:56.162392 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df781460-393f-441c-85a9-ab19366c8734-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:56.165492 master-0 kubenswrapper[31559]: I0216 02:39:56.164573 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df781460-393f-441c-85a9-ab19366c8734-kube-api-access-4vxgs" (OuterVolumeSpecName: "kube-api-access-4vxgs") pod "df781460-393f-441c-85a9-ab19366c8734" (UID: "df781460-393f-441c-85a9-ab19366c8734"). InnerVolumeSpecName "kube-api-access-4vxgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:56.166822 master-0 kubenswrapper[31559]: I0216 02:39:56.166773 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502da4cf-fead-4624-891d-f8db5815915f-kube-api-access-vt7hk" (OuterVolumeSpecName: "kube-api-access-vt7hk") pod "502da4cf-fead-4624-891d-f8db5815915f" (UID: "502da4cf-fead-4624-891d-f8db5815915f"). InnerVolumeSpecName "kube-api-access-vt7hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:39:56.214621 master-0 kubenswrapper[31559]: I0216 02:39:56.214545 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"c72a1046-0b7c-4706-b4a0-1d87b0e50900","Type":"ContainerStarted","Data":"83d9d25f671c8e6e958f98fc45f5434eba731332c515b7dee9fe008b7434bd13"} Feb 16 02:39:56.214621 master-0 kubenswrapper[31559]: I0216 02:39:56.214621 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"c72a1046-0b7c-4706-b4a0-1d87b0e50900","Type":"ContainerStarted","Data":"63bd2cc77e5ffda7e9ac78a764f1cb9d076acb112bc58e97752955a884760302"} Feb 16 02:39:56.215996 master-0 kubenswrapper[31559]: I0216 02:39:56.215911 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c9d68ffd6-4ln7q" event={"ID":"778356f6-5ce1-4462-8bf6-8c63775e7fe0","Type":"ContainerStarted","Data":"d4630bafa79ddcbe774c414113311a615ccb91801735452c566a5ca3d2c533df"} Feb 16 02:39:56.217526 master-0 kubenswrapper[31559]: I0216 02:39:56.217485 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-ldt4r" event={"ID":"502da4cf-fead-4624-891d-f8db5815915f","Type":"ContainerDied","Data":"0ce86daf091ec67418d0159644715c62c9007b428afc25d15679be1cbec53a28"} Feb 16 02:39:56.217526 master-0 kubenswrapper[31559]: I0216 02:39:56.217510 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce86daf091ec67418d0159644715c62c9007b428afc25d15679be1cbec53a28" Feb 16 02:39:56.217643 master-0 kubenswrapper[31559]: I0216 02:39:56.217556 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-ldt4r" Feb 16 02:39:56.223835 master-0 kubenswrapper[31559]: I0216 02:39:56.223738 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" event={"ID":"df781460-393f-441c-85a9-ab19366c8734","Type":"ContainerDied","Data":"c00624880e36bfe93d505c7010d47df680cb124c827e808de9e77a30d09d2b9a"} Feb 16 02:39:56.223835 master-0 kubenswrapper[31559]: I0216 02:39:56.223796 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c00624880e36bfe93d505c7010d47df680cb124c827e808de9e77a30d09d2b9a" Feb 16 02:39:56.223835 master-0 kubenswrapper[31559]: I0216 02:39:56.223813 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-01eb-account-create-update-cvszz" Feb 16 02:39:56.268948 master-0 kubenswrapper[31559]: I0216 02:39:56.268895 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vxgs\" (UniqueName: \"kubernetes.io/projected/df781460-393f-441c-85a9-ab19366c8734-kube-api-access-4vxgs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:56.269221 master-0 kubenswrapper[31559]: I0216 02:39:56.269208 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt7hk\" (UniqueName: \"kubernetes.io/projected/502da4cf-fead-4624-891d-f8db5815915f-kube-api-access-vt7hk\") on node \"master-0\" DevicePath \"\"" Feb 16 02:39:56.576772 master-0 kubenswrapper[31559]: I0216 02:39:56.576644 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-dde57-volume-lvm-iscsi-0" Feb 16 02:39:57.243684 master-0 kubenswrapper[31559]: I0216 02:39:57.243220 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerStarted","Data":"c1c4a850a01cdf8ef0c7eaae6b2545c325fcc80b2c2e68ff457da727016652e1"} 
Feb 16 02:39:57.316519 master-0 kubenswrapper[31559]: I0216 02:39:57.316466 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:57.318587 master-0 kubenswrapper[31559]: I0216 02:39:57.318184 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:39:58.150455 master-0 kubenswrapper[31559]: I0216 02:39:58.146846 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:58.263542 master-0 kubenswrapper[31559]: I0216 02:39:58.263495 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-backup-0"] Feb 16 02:39:58.270272 master-0 kubenswrapper[31559]: I0216 02:39:58.270191 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"5502f914-cfa4-4338-b809-c3407e74b10e","Type":"ContainerStarted","Data":"268b1121e6230ac2347546448fcdd44db4eac1c2036e3c7b58ec3179bec8b9b5"} Feb 16 02:39:58.271712 master-0 kubenswrapper[31559]: I0216 02:39:58.271677 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" event={"ID":"ec163f18-db19-4327-ae8b-6feb4c6004af","Type":"ContainerStarted","Data":"e8fb80b64e740d447995e951c1f82f5aee648bdeca359a68246174f512005a67"} Feb 16 02:39:58.271954 master-0 kubenswrapper[31559]: I0216 02:39:58.271916 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:39:58.274176 master-0 kubenswrapper[31559]: I0216 02:39:58.274145 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c9d68ffd6-4ln7q" event={"ID":"778356f6-5ce1-4462-8bf6-8c63775e7fe0","Type":"ContainerStarted","Data":"2b453b643179e93d930fde171775c2905ad1e13f781b2cfc535d0ae3a902398c"} Feb 16 02:39:58.275919 master-0 kubenswrapper[31559]: I0216 
02:39:58.275889 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerStarted","Data":"8624eec5aa4fb47d6023393bb2cd4416013c7b136771923cd96484cf18318d46"} Feb 16 02:39:58.294741 master-0 kubenswrapper[31559]: I0216 02:39:58.294561 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" podStartSLOduration=3.066613506 podStartE2EDuration="7.294527697s" podCreationTimestamp="2026-02-16 02:39:51 +0000 UTC" firstStartedPulling="2026-02-16 02:39:53.134036505 +0000 UTC m=+1045.478642520" lastFinishedPulling="2026-02-16 02:39:57.361950696 +0000 UTC m=+1049.706556711" observedRunningTime="2026-02-16 02:39:58.286533735 +0000 UTC m=+1050.631139750" watchObservedRunningTime="2026-02-16 02:39:58.294527697 +0000 UTC m=+1050.639133712" Feb 16 02:39:58.324507 master-0 kubenswrapper[31559]: I0216 02:39:58.324406 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55c5776498-fmz6g" Feb 16 02:39:58.465003 master-0 kubenswrapper[31559]: I0216 02:39:58.462066 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6899ff6f9c-4hbb9" Feb 16 02:39:58.516482 master-0 kubenswrapper[31559]: I0216 02:39:58.511327 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6896ff5478-9txrd"] Feb 16 02:39:59.196582 master-0 kubenswrapper[31559]: I0216 02:39:59.196522 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 02:39:59.197182 master-0 kubenswrapper[31559]: E0216 02:39:59.197147 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df781460-393f-441c-85a9-ab19366c8734" containerName="mariadb-account-create-update" Feb 16 02:39:59.197240 master-0 kubenswrapper[31559]: I0216 02:39:59.197185 31559 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="df781460-393f-441c-85a9-ab19366c8734" containerName="mariadb-account-create-update" Feb 16 02:39:59.197279 master-0 kubenswrapper[31559]: E0216 02:39:59.197235 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502da4cf-fead-4624-891d-f8db5815915f" containerName="mariadb-database-create" Feb 16 02:39:59.197279 master-0 kubenswrapper[31559]: I0216 02:39:59.197254 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="502da4cf-fead-4624-891d-f8db5815915f" containerName="mariadb-database-create" Feb 16 02:39:59.197717 master-0 kubenswrapper[31559]: I0216 02:39:59.197684 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="df781460-393f-441c-85a9-ab19366c8734" containerName="mariadb-account-create-update" Feb 16 02:39:59.197771 master-0 kubenswrapper[31559]: I0216 02:39:59.197759 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="502da4cf-fead-4624-891d-f8db5815915f" containerName="mariadb-database-create" Feb 16 02:39:59.198997 master-0 kubenswrapper[31559]: I0216 02:39:59.198966 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 02:39:59.217456 master-0 kubenswrapper[31559]: I0216 02:39:59.216359 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config-secret\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.217456 master-0 kubenswrapper[31559]: I0216 02:39:59.216422 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86f8t\" (UniqueName: \"kubernetes.io/projected/507c516c-4177-4f63-bd9b-512c486e44eb-kube-api-access-86f8t\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.217456 master-0 kubenswrapper[31559]: I0216 02:39:59.216514 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.217456 master-0 kubenswrapper[31559]: I0216 02:39:59.216557 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.242506 master-0 kubenswrapper[31559]: I0216 02:39:59.227018 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 02:39:59.247594 master-0 kubenswrapper[31559]: I0216 02:39:59.247548 31559 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"openstack-config" Feb 16 02:39:59.247982 master-0 kubenswrapper[31559]: I0216 02:39:59.247677 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 02:39:59.288572 master-0 kubenswrapper[31559]: I0216 02:39:59.288514 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-scheduler-0" event={"ID":"c72a1046-0b7c-4706-b4a0-1d87b0e50900","Type":"ContainerStarted","Data":"914b52b271f93366bee8e401f798773f0bebf6d25725352ad1b4e93470c2dc1b"} Feb 16 02:39:59.294827 master-0 kubenswrapper[31559]: I0216 02:39:59.291328 31559 generic.go:334] "Generic (PLEG): container finished" podID="778356f6-5ce1-4462-8bf6-8c63775e7fe0" containerID="2b453b643179e93d930fde171775c2905ad1e13f781b2cfc535d0ae3a902398c" exitCode=0 Feb 16 02:39:59.294827 master-0 kubenswrapper[31559]: I0216 02:39:59.291371 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c9d68ffd6-4ln7q" event={"ID":"778356f6-5ce1-4462-8bf6-8c63775e7fe0","Type":"ContainerDied","Data":"2b453b643179e93d930fde171775c2905ad1e13f781b2cfc535d0ae3a902398c"} Feb 16 02:39:59.296060 master-0 kubenswrapper[31559]: I0216 02:39:59.295802 31559 generic.go:334] "Generic (PLEG): container finished" podID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerID="8624eec5aa4fb47d6023393bb2cd4416013c7b136771923cd96484cf18318d46" exitCode=1 Feb 16 02:39:59.296060 master-0 kubenswrapper[31559]: I0216 02:39:59.295926 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerDied","Data":"8624eec5aa4fb47d6023393bb2cd4416013c7b136771923cd96484cf18318d46"} Feb 16 02:39:59.305986 master-0 kubenswrapper[31559]: I0216 02:39:59.305442 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerStarted","Data":"f03f43aa9c18c888101d1b592175d2d6d71a6ff08784742267c79ea8927482ba"} Feb 16 02:39:59.311548 master-0 kubenswrapper[31559]: I0216 02:39:59.311492 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-scheduler-0" podStartSLOduration=5.311474548 podStartE2EDuration="5.311474548s" podCreationTimestamp="2026-02-16 02:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:59.310342159 +0000 UTC m=+1051.654948174" watchObservedRunningTime="2026-02-16 02:39:59.311474548 +0000 UTC m=+1051.656080563" Feb 16 02:39:59.318639 master-0 kubenswrapper[31559]: I0216 02:39:59.318584 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config-secret\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.318721 master-0 kubenswrapper[31559]: I0216 02:39:59.318655 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86f8t\" (UniqueName: \"kubernetes.io/projected/507c516c-4177-4f63-bd9b-512c486e44eb-kube-api-access-86f8t\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.318759 master-0 kubenswrapper[31559]: I0216 02:39:59.318748 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.318811 master-0 kubenswrapper[31559]: I0216 02:39:59.318794 31559 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.319709 master-0 kubenswrapper[31559]: I0216 02:39:59.319682 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.329701 master-0 kubenswrapper[31559]: I0216 02:39:59.326957 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6896ff5478-9txrd" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerName="placement-log" containerID="cri-o://5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435" gracePeriod=30 Feb 16 02:39:59.329701 master-0 kubenswrapper[31559]: I0216 02:39:59.327460 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"5502f914-cfa4-4338-b809-c3407e74b10e","Type":"ContainerStarted","Data":"54c8fd53d729d513e9b0a7642075953b6a610e61925a39cd8c6bb9b6304d10a6"} Feb 16 02:39:59.329701 master-0 kubenswrapper[31559]: I0216 02:39:59.327486 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-backup-0" event={"ID":"5502f914-cfa4-4338-b809-c3407e74b10e","Type":"ContainerStarted","Data":"a7b7d547841ec075f2851c2f7ef090179bac893f86136cdc9ae649e9d1405880"} Feb 16 02:39:59.329701 master-0 kubenswrapper[31559]: I0216 02:39:59.327724 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6896ff5478-9txrd" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerName="placement-api" 
containerID="cri-o://8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1" gracePeriod=30 Feb 16 02:39:59.341616 master-0 kubenswrapper[31559]: I0216 02:39:59.341563 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86f8t\" (UniqueName: \"kubernetes.io/projected/507c516c-4177-4f63-bd9b-512c486e44eb-kube-api-access-86f8t\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.345419 master-0 kubenswrapper[31559]: I0216 02:39:59.342343 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.345419 master-0 kubenswrapper[31559]: I0216 02:39:59.344673 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config-secret\") pod \"openstackclient\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") " pod="openstack/openstackclient" Feb 16 02:39:59.378711 master-0 kubenswrapper[31559]: I0216 02:39:59.378640 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 16 02:39:59.379780 master-0 kubenswrapper[31559]: I0216 02:39:59.379727 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 02:39:59.445674 master-0 kubenswrapper[31559]: I0216 02:39:59.418871 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 16 02:39:59.453954 master-0 kubenswrapper[31559]: I0216 02:39:59.453835 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 02:39:59.455315 master-0 kubenswrapper[31559]: I0216 02:39:59.455288 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 02:39:59.490650 master-0 kubenswrapper[31559]: I0216 02:39:59.490564 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 02:39:59.493873 master-0 kubenswrapper[31559]: I0216 02:39:59.493809 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-backup-0" podStartSLOduration=4.49378731 podStartE2EDuration="4.49378731s" podCreationTimestamp="2026-02-16 02:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:39:59.454852873 +0000 UTC m=+1051.799458888" watchObservedRunningTime="2026-02-16 02:39:59.49378731 +0000 UTC m=+1051.838393325" Feb 16 02:39:59.633824 master-0 kubenswrapper[31559]: I0216 02:39:59.631577 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfpj\" (UniqueName: \"kubernetes.io/projected/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-kube-api-access-cqfpj\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.633824 master-0 kubenswrapper[31559]: I0216 02:39:59.631768 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-openstack-config-secret\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.633824 master-0 kubenswrapper[31559]: I0216 02:39:59.632333 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.633824 master-0 kubenswrapper[31559]: I0216 02:39:59.632596 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-openstack-config\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.738780 master-0 kubenswrapper[31559]: I0216 02:39:59.736790 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-openstack-config-secret\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.738780 master-0 kubenswrapper[31559]: I0216 02:39:59.736865 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.738780 master-0 kubenswrapper[31559]: I0216 02:39:59.736937 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-openstack-config\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.738780 master-0 kubenswrapper[31559]: I0216 02:39:59.737003 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqfpj\" (UniqueName: \"kubernetes.io/projected/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-kube-api-access-cqfpj\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.738780 master-0 kubenswrapper[31559]: I0216 02:39:59.738517 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-openstack-config\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.746461 master-0 kubenswrapper[31559]: I0216 02:39:59.743067 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-openstack-config-secret\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient" Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: E0216 02:39:59.763610 31559 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_507c516c-4177-4f63-bd9b-512c486e44eb_0(d6ff4bc99c14f6b77cac70c7aff6b8a568e6956f8a7e6eaf503b3ea2c2736adf): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"d6ff4bc99c14f6b77cac70c7aff6b8a568e6956f8a7e6eaf503b3ea2c2736adf" Netns:"/var/run/netns/789476d4-45ac-4110-836c-10a8dcc8c46f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d6ff4bc99c14f6b77cac70c7aff6b8a568e6956f8a7e6eaf503b3ea2c2736adf;K8S_POD_UID=507c516c-4177-4f63-bd9b-512c486e44eb" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/507c516c-4177-4f63-bd9b-512c486e44eb]: expected pod UID "507c516c-4177-4f63-bd9b-512c486e44eb" but got "96e7ee31-a94f-4d6d-a1c7-abb314357ff7" from Kube API
Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: >
Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: E0216 02:39:59.763680 31559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_507c516c-4177-4f63-bd9b-512c486e44eb_0(d6ff4bc99c14f6b77cac70c7aff6b8a568e6956f8a7e6eaf503b3ea2c2736adf): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d6ff4bc99c14f6b77cac70c7aff6b8a568e6956f8a7e6eaf503b3ea2c2736adf" Netns:"/var/run/netns/789476d4-45ac-4110-836c-10a8dcc8c46f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d6ff4bc99c14f6b77cac70c7aff6b8a568e6956f8a7e6eaf503b3ea2c2736adf;K8S_POD_UID=507c516c-4177-4f63-bd9b-512c486e44eb" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/507c516c-4177-4f63-bd9b-512c486e44eb]: expected pod UID "507c516c-4177-4f63-bd9b-512c486e44eb" but got "96e7ee31-a94f-4d6d-a1c7-abb314357ff7" from Kube API
Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: > pod="openstack/openstackclient"
Feb 16 02:39:59.764534 master-0 kubenswrapper[31559]: I0216 02:39:59.764114 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient"
Feb 16 02:39:59.772695 master-0 kubenswrapper[31559]: I0216 02:39:59.771818 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqfpj\" (UniqueName: \"kubernetes.io/projected/96e7ee31-a94f-4d6d-a1c7-abb314357ff7-kube-api-access-cqfpj\") pod \"openstackclient\" (UID: \"96e7ee31-a94f-4d6d-a1c7-abb314357ff7\") " pod="openstack/openstackclient"
Feb 16 02:39:59.791137 master-0 kubenswrapper[31559]: I0216 02:39:59.791046 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 02:39:59.869303 master-0 kubenswrapper[31559]: I0216 02:39:59.869244 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-dde57-scheduler-0"
Feb 16 02:40:00.278518 master-0 kubenswrapper[31559]: I0216 02:40:00.278370 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 02:40:00.367543 master-0 kubenswrapper[31559]: I0216 02:40:00.367111 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c9d68ffd6-4ln7q" event={"ID":"778356f6-5ce1-4462-8bf6-8c63775e7fe0","Type":"ContainerStarted","Data":"e76f238be015af8507ae39e8e2be65875298c014cac43b0c54b9d4c285a55c74"}
Feb 16 02:40:00.367543 master-0 kubenswrapper[31559]: I0216 02:40:00.367162 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c9d68ffd6-4ln7q" event={"ID":"778356f6-5ce1-4462-8bf6-8c63775e7fe0","Type":"ContainerStarted","Data":"4575a8d355180cabfe9fcae86cd181e92bf1d6130a62dd9b8082f676b53557b9"}
Feb 16 02:40:00.368519 master-0 kubenswrapper[31559]: I0216 02:40:00.368417 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-c9d68ffd6-4ln7q"
Feb 16 02:40:00.372419 master-0 kubenswrapper[31559]: I0216 02:40:00.372381 31559 generic.go:334] "Generic (PLEG): container finished" podID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerID="5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435" exitCode=143
Feb 16 02:40:00.372570 master-0 kubenswrapper[31559]: I0216 02:40:00.372548 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6896ff5478-9txrd" event={"ID":"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76","Type":"ContainerDied","Data":"5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435"}
Feb 16 02:40:00.410678 master-0 kubenswrapper[31559]: I0216 02:40:00.410615 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerStarted","Data":"6d97272db2893c7c52f68fe3a07382e729c317a53b5116fe9d87e3237529020f"}
Feb 16 02:40:00.424146 master-0 kubenswrapper[31559]: I0216 02:40:00.419553 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 02:40:00.424146 master-0 kubenswrapper[31559]: I0216 02:40:00.420728 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"96e7ee31-a94f-4d6d-a1c7-abb314357ff7","Type":"ContainerStarted","Data":"ba3b4b38bf107f6eca115bb177ca22d284da6aba1402211cc31862b2b51807d6"}
Feb 16 02:40:00.428607 master-0 kubenswrapper[31559]: I0216 02:40:00.425144 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-c9d68ffd6-4ln7q" podStartSLOduration=5.128009327 podStartE2EDuration="6.425120049s" podCreationTimestamp="2026-02-16 02:39:54 +0000 UTC" firstStartedPulling="2026-02-16 02:39:56.064834904 +0000 UTC m=+1048.409440919" lastFinishedPulling="2026-02-16 02:39:57.361945636 +0000 UTC m=+1049.706551641" observedRunningTime="2026-02-16 02:40:00.395506948 +0000 UTC m=+1052.740112963" watchObservedRunningTime="2026-02-16 02:40:00.425120049 +0000 UTC m=+1052.769726054"
Feb 16 02:40:00.438546 master-0 kubenswrapper[31559]: I0216 02:40:00.438473 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 02:40:00.441138 master-0 kubenswrapper[31559]: I0216 02:40:00.441093 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="507c516c-4177-4f63-bd9b-512c486e44eb" podUID="96e7ee31-a94f-4d6d-a1c7-abb314357ff7"
Feb 16 02:40:00.558307 master-0 kubenswrapper[31559]: I0216 02:40:00.558260 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config\") pod \"507c516c-4177-4f63-bd9b-512c486e44eb\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") "
Feb 16 02:40:00.558607 master-0 kubenswrapper[31559]: I0216 02:40:00.558575 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86f8t\" (UniqueName: \"kubernetes.io/projected/507c516c-4177-4f63-bd9b-512c486e44eb-kube-api-access-86f8t\") pod \"507c516c-4177-4f63-bd9b-512c486e44eb\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") "
Feb 16 02:40:00.558904 master-0 kubenswrapper[31559]: I0216 02:40:00.558877 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config-secret\") pod \"507c516c-4177-4f63-bd9b-512c486e44eb\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") "
Feb 16 02:40:00.559260 master-0 kubenswrapper[31559]: I0216 02:40:00.559195 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-combined-ca-bundle\") pod \"507c516c-4177-4f63-bd9b-512c486e44eb\" (UID: \"507c516c-4177-4f63-bd9b-512c486e44eb\") "
Feb 16 02:40:00.563043 master-0 kubenswrapper[31559]: I0216 02:40:00.563003 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "507c516c-4177-4f63-bd9b-512c486e44eb" (UID: "507c516c-4177-4f63-bd9b-512c486e44eb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:00.564217 master-0 kubenswrapper[31559]: I0216 02:40:00.564186 31559 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:00.566518 master-0 kubenswrapper[31559]: I0216 02:40:00.566481 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507c516c-4177-4f63-bd9b-512c486e44eb-kube-api-access-86f8t" (OuterVolumeSpecName: "kube-api-access-86f8t") pod "507c516c-4177-4f63-bd9b-512c486e44eb" (UID: "507c516c-4177-4f63-bd9b-512c486e44eb"). InnerVolumeSpecName "kube-api-access-86f8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:40:00.567692 master-0 kubenswrapper[31559]: I0216 02:40:00.567618 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "507c516c-4177-4f63-bd9b-512c486e44eb" (UID: "507c516c-4177-4f63-bd9b-512c486e44eb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:00.573082 master-0 kubenswrapper[31559]: I0216 02:40:00.573026 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "507c516c-4177-4f63-bd9b-512c486e44eb" (UID: "507c516c-4177-4f63-bd9b-512c486e44eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:00.670081 master-0 kubenswrapper[31559]: I0216 02:40:00.670016 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86f8t\" (UniqueName: \"kubernetes.io/projected/507c516c-4177-4f63-bd9b-512c486e44eb-kube-api-access-86f8t\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:00.671668 master-0 kubenswrapper[31559]: I0216 02:40:00.671611 31559 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-openstack-config-secret\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:00.671668 master-0 kubenswrapper[31559]: I0216 02:40:00.671661 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507c516c-4177-4f63-bd9b-512c486e44eb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:00.824391 master-0 kubenswrapper[31559]: I0216 02:40:00.824327 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-dde57-backup-0"
Feb 16 02:40:01.436654 master-0 kubenswrapper[31559]: I0216 02:40:01.436599 31559 generic.go:334] "Generic (PLEG): container finished" podID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerID="6d97272db2893c7c52f68fe3a07382e729c317a53b5116fe9d87e3237529020f" exitCode=0
Feb 16 02:40:01.437977 master-0 kubenswrapper[31559]: I0216 02:40:01.436715 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerDied","Data":"6d97272db2893c7c52f68fe3a07382e729c317a53b5116fe9d87e3237529020f"}
Feb 16 02:40:01.437977 master-0 kubenswrapper[31559]: I0216 02:40:01.436776 31559 scope.go:117] "RemoveContainer" containerID="8624eec5aa4fb47d6023393bb2cd4416013c7b136771923cd96484cf18318d46"
Feb 16 02:40:01.437977 master-0 kubenswrapper[31559]: I0216 02:40:01.437343 31559 scope.go:117] "RemoveContainer" containerID="8624eec5aa4fb47d6023393bb2cd4416013c7b136771923cd96484cf18318d46"
Feb 16 02:40:01.439845 master-0 kubenswrapper[31559]: I0216 02:40:01.439219 31559 generic.go:334] "Generic (PLEG): container finished" podID="f71d864b-c882-43b8-a7a7-b1e163d38aa4" containerID="f03f43aa9c18c888101d1b592175d2d6d71a6ff08784742267c79ea8927482ba" exitCode=0
Feb 16 02:40:01.439845 master-0 kubenswrapper[31559]: I0216 02:40:01.439275 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerDied","Data":"f03f43aa9c18c888101d1b592175d2d6d71a6ff08784742267c79ea8927482ba"}
Feb 16 02:40:01.452604 master-0 kubenswrapper[31559]: I0216 02:40:01.444994 31559 generic.go:334] "Generic (PLEG): container finished" podID="ec163f18-db19-4327-ae8b-6feb4c6004af" containerID="e8fb80b64e740d447995e951c1f82f5aee648bdeca359a68246174f512005a67" exitCode=1
Feb 16 02:40:01.452604 master-0 kubenswrapper[31559]: I0216 02:40:01.446196 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" event={"ID":"ec163f18-db19-4327-ae8b-6feb4c6004af","Type":"ContainerDied","Data":"e8fb80b64e740d447995e951c1f82f5aee648bdeca359a68246174f512005a67"}
Feb 16 02:40:01.452604 master-0 kubenswrapper[31559]: I0216 02:40:01.447057 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 02:40:01.452604 master-0 kubenswrapper[31559]: I0216 02:40:01.447284 31559 scope.go:117] "RemoveContainer" containerID="e8fb80b64e740d447995e951c1f82f5aee648bdeca359a68246174f512005a67"
Feb 16 02:40:01.551896 master-0 kubenswrapper[31559]: E0216 02:40:01.551722 31559 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_init_ironic-86f55b5cf6-zxgrr_openstack_bbb13644-ecbf-43f1-9203-c714b5485f17_0 in pod sandbox 170a1d2795445711807f791c40d0b8140f0f02e67dec9199cbffaded413cd093 from index: no such id: '8624eec5aa4fb47d6023393bb2cd4416013c7b136771923cd96484cf18318d46'" containerID="8624eec5aa4fb47d6023393bb2cd4416013c7b136771923cd96484cf18318d46"
Feb 16 02:40:01.552824 master-0 kubenswrapper[31559]: E0216 02:40:01.552787 31559 kuberuntime_container.go:896] "Unhandled Error" err="failed to remove pod init container \"init\": rpc error: code = Unknown desc = failed to delete container k8s_init_ironic-86f55b5cf6-zxgrr_openstack_bbb13644-ecbf-43f1-9203-c714b5485f17_0 in pod sandbox 170a1d2795445711807f791c40d0b8140f0f02e67dec9199cbffaded413cd093 from index: no such id: '8624eec5aa4fb47d6023393bb2cd4416013c7b136771923cd96484cf18318d46'; Skipping pod \"ironic-86f55b5cf6-zxgrr_openstack(bbb13644-ecbf-43f1-9203-c714b5485f17)\"" logger="UnhandledError"
Feb 16 02:40:01.579817 master-0 kubenswrapper[31559]: I0216 02:40:01.579760 31559 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="507c516c-4177-4f63-bd9b-512c486e44eb" podUID="96e7ee31-a94f-4d6d-a1c7-abb314357ff7"
Feb 16 02:40:01.755970 master-0 kubenswrapper[31559]: I0216 02:40:01.755494 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-dde57-volume-lvm-iscsi-0"
Feb 16 02:40:01.820774 master-0 kubenswrapper[31559]: I0216 02:40:01.820730 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fbd84b845-7w476"
Feb 16 02:40:01.893337 master-0 kubenswrapper[31559]: I0216 02:40:01.893183 31559 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45"
Feb 16 02:40:01.954752 master-0 kubenswrapper[31559]: I0216 02:40:01.954468 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507c516c-4177-4f63-bd9b-512c486e44eb" path="/var/lib/kubelet/pods/507c516c-4177-4f63-bd9b-512c486e44eb/volumes"
Feb 16 02:40:01.955056 master-0 kubenswrapper[31559]: I0216 02:40:01.954910 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f4994bbb5-4cdht"]
Feb 16 02:40:01.955156 master-0 kubenswrapper[31559]: I0216 02:40:01.955118 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" podUID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" containerName="dnsmasq-dns" containerID="cri-o://a1d9f4e6292669bc3ff67d074e9ea41ae65a31160716f74e2635233d888e7c50" gracePeriod=10
Feb 16 02:40:02.483708 master-0 kubenswrapper[31559]: I0216 02:40:02.478833 31559 generic.go:334] "Generic (PLEG): container finished" podID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" containerID="a1d9f4e6292669bc3ff67d074e9ea41ae65a31160716f74e2635233d888e7c50" exitCode=0
Feb 16 02:40:02.483708 master-0 kubenswrapper[31559]: I0216 02:40:02.478937 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" event={"ID":"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5","Type":"ContainerDied","Data":"a1d9f4e6292669bc3ff67d074e9ea41ae65a31160716f74e2635233d888e7c50"}
Feb 16 02:40:02.499550 master-0 kubenswrapper[31559]: I0216 02:40:02.486661 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" event={"ID":"ec163f18-db19-4327-ae8b-6feb4c6004af","Type":"ContainerStarted","Data":"8f139a71e4d429c694dd54dcec5c0508e8a590e9c75fd37980746cd97af2f3ee"}
Feb 16 02:40:02.499550 master-0 kubenswrapper[31559]: I0216 02:40:02.486724 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45"
Feb 16 02:40:02.506582 master-0 kubenswrapper[31559]: I0216 02:40:02.506528 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerStarted","Data":"0a4e84b2ea6f10e1d3e74861c5da5436c395ea15046a8e869a906c01a02e571f"}
Feb 16 02:40:02.506582 master-0 kubenswrapper[31559]: I0216 02:40:02.506585 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerStarted","Data":"b85a5b72da96bec9978439d8cea19f4daef5f5443be540ddd413f27358a35045"}
Feb 16 02:40:02.537275 master-0 kubenswrapper[31559]: I0216 02:40:02.533399 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-86b5fc89c6-rhb4k"]
Feb 16 02:40:02.539158 master-0 kubenswrapper[31559]: I0216 02:40:02.538318 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.546948 master-0 kubenswrapper[31559]: I0216 02:40:02.541575 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 16 02:40:02.546948 master-0 kubenswrapper[31559]: I0216 02:40:02.541634 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 02:40:02.546948 master-0 kubenswrapper[31559]: I0216 02:40:02.541589 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 16 02:40:02.572454 master-0 kubenswrapper[31559]: I0216 02:40:02.572246 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86b5fc89c6-rhb4k"]
Feb 16 02:40:02.608540 master-0 kubenswrapper[31559]: I0216 02:40:02.607310 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.632365 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e89de18a-e96b-4703-836c-354a5a6f88ee-log-httpd\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.632568 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e89de18a-e96b-4703-836c-354a5a6f88ee-run-httpd\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.632621 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5j45\" (UniqueName: \"kubernetes.io/projected/e89de18a-e96b-4703-836c-354a5a6f88ee-kube-api-access-b5j45\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.632643 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-public-tls-certs\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.632694 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e89de18a-e96b-4703-836c-354a5a6f88ee-etc-swift\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.632757 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-config-data\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.632883 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-internal-tls-certs\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.632917 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-combined-ca-bundle\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.644485 master-0 kubenswrapper[31559]: I0216 02:40:02.634554 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-86f55b5cf6-zxgrr" podStartSLOduration=7.379110872 podStartE2EDuration="11.63453378s" podCreationTimestamp="2026-02-16 02:39:51 +0000 UTC" firstStartedPulling="2026-02-16 02:39:53.133749548 +0000 UTC m=+1045.478355563" lastFinishedPulling="2026-02-16 02:39:57.389172456 +0000 UTC m=+1049.733778471" observedRunningTime="2026-02-16 02:40:02.561067197 +0000 UTC m=+1054.905673212" watchObservedRunningTime="2026-02-16 02:40:02.63453378 +0000 UTC m=+1054.979139795"
Feb 16 02:40:02.736308 master-0 kubenswrapper[31559]: I0216 02:40:02.736200 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-sb\") pod \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") "
Feb 16 02:40:02.736680 master-0 kubenswrapper[31559]: I0216 02:40:02.736566 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-swift-storage-0\") pod \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") "
Feb 16 02:40:02.736680 master-0 kubenswrapper[31559]: I0216 02:40:02.736653 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-nb\") pod \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") "
Feb 16 02:40:02.736767 master-0 kubenswrapper[31559]: I0216 02:40:02.736683 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-svc\") pod \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") "
Feb 16 02:40:02.736767 master-0 kubenswrapper[31559]: I0216 02:40:02.736715 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dwrc\" (UniqueName: \"kubernetes.io/projected/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-kube-api-access-9dwrc\") pod \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") "
Feb 16 02:40:02.736841 master-0 kubenswrapper[31559]: I0216 02:40:02.736812 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-config\") pod \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\" (UID: \"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5\") "
Feb 16 02:40:02.737325 master-0 kubenswrapper[31559]: I0216 02:40:02.737267 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e89de18a-e96b-4703-836c-354a5a6f88ee-run-httpd\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.737392 master-0 kubenswrapper[31559]: I0216 02:40:02.737330 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5j45\" (UniqueName: \"kubernetes.io/projected/e89de18a-e96b-4703-836c-354a5a6f88ee-kube-api-access-b5j45\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.737392 master-0 kubenswrapper[31559]: I0216 02:40:02.737356 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-public-tls-certs\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.737392 master-0 kubenswrapper[31559]: I0216 02:40:02.737386 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e89de18a-e96b-4703-836c-354a5a6f88ee-etc-swift\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.737585 master-0 kubenswrapper[31559]: I0216 02:40:02.737422 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-config-data\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.737585 master-0 kubenswrapper[31559]: I0216 02:40:02.737511 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-internal-tls-certs\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.737585 master-0 kubenswrapper[31559]: I0216 02:40:02.737539 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-combined-ca-bundle\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.737698 master-0 kubenswrapper[31559]: I0216 02:40:02.737600 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e89de18a-e96b-4703-836c-354a5a6f88ee-log-httpd\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.738567 master-0 kubenswrapper[31559]: I0216 02:40:02.738542 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e89de18a-e96b-4703-836c-354a5a6f88ee-log-httpd\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.738718 master-0 kubenswrapper[31559]: I0216 02:40:02.738687 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e89de18a-e96b-4703-836c-354a5a6f88ee-run-httpd\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.747596 master-0 kubenswrapper[31559]: I0216 02:40:02.747533 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-kube-api-access-9dwrc" (OuterVolumeSpecName: "kube-api-access-9dwrc") pod "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" (UID: "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5"). InnerVolumeSpecName "kube-api-access-9dwrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:40:02.753760 master-0 kubenswrapper[31559]: I0216 02:40:02.751718 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-public-tls-certs\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.753760 master-0 kubenswrapper[31559]: I0216 02:40:02.753005 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-config-data\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.755845 master-0 kubenswrapper[31559]: I0216 02:40:02.754979 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-internal-tls-certs\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.758812 master-0 kubenswrapper[31559]: I0216 02:40:02.758386 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89de18a-e96b-4703-836c-354a5a6f88ee-combined-ca-bundle\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.772453 master-0 kubenswrapper[31559]: I0216 02:40:02.770620 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e89de18a-e96b-4703-836c-354a5a6f88ee-etc-swift\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.783448 master-0 kubenswrapper[31559]: I0216 02:40:02.775695 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5j45\" (UniqueName: \"kubernetes.io/projected/e89de18a-e96b-4703-836c-354a5a6f88ee-kube-api-access-b5j45\") pod \"swift-proxy-86b5fc89c6-rhb4k\" (UID: \"e89de18a-e96b-4703-836c-354a5a6f88ee\") " pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.812276 master-0 kubenswrapper[31559]: I0216 02:40:02.810945 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" (UID: "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:02.812276 master-0 kubenswrapper[31559]: I0216 02:40:02.811017 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" (UID: "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:02.828465 master-0 kubenswrapper[31559]: I0216 02:40:02.827351 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-config" (OuterVolumeSpecName: "config") pod "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" (UID: "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:02.839684 master-0 kubenswrapper[31559]: I0216 02:40:02.839630 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:02.839684 master-0 kubenswrapper[31559]: I0216 02:40:02.839670 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dwrc\" (UniqueName: \"kubernetes.io/projected/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-kube-api-access-9dwrc\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:02.839684 master-0 kubenswrapper[31559]: I0216 02:40:02.839681 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:02.839684 master-0 kubenswrapper[31559]: I0216 02:40:02.839691 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:02.840402 master-0 kubenswrapper[31559]: I0216 02:40:02.840351 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" (UID: "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:02.854552 master-0 kubenswrapper[31559]: I0216 02:40:02.854475 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" (UID: "43e8d4ad-2cb2-43ad-9255-b8285c08e9d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:02.924242 master-0 kubenswrapper[31559]: I0216 02:40:02.924177 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:02.945525 master-0 kubenswrapper[31559]: I0216 02:40:02.943905 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:02.945525 master-0 kubenswrapper[31559]: I0216 02:40:02.943975 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:03.269914 master-0 kubenswrapper[31559]: I0216 02:40:03.269757 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6896ff5478-9txrd"
Feb 16 02:40:03.356080 master-0 kubenswrapper[31559]: I0216 02:40:03.356021 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-public-tls-certs\") pod \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") "
Feb 16 02:40:03.356277 master-0 kubenswrapper[31559]: I0216 02:40:03.356110 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-combined-ca-bundle\") pod \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") "
Feb 16 02:40:03.356277 master-0 kubenswrapper[31559]: I0216 02:40:03.356149 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-config-data\") pod \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") "
Feb 16 02:40:03.356277 master-0 kubenswrapper[31559]: I0216 02:40:03.356173 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-scripts\") pod \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") "
Feb 16 02:40:03.356277 master-0 kubenswrapper[31559]: I0216 02:40:03.356252 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-logs\") pod \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") "
Feb 16 02:40:03.356402 master-0 kubenswrapper[31559]: I0216 02:40:03.356387 31559 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wlbjg\" (UniqueName: \"kubernetes.io/projected/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-kube-api-access-wlbjg\") pod \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " Feb 16 02:40:03.356483 master-0 kubenswrapper[31559]: I0216 02:40:03.356463 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-internal-tls-certs\") pod \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\" (UID: \"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76\") " Feb 16 02:40:03.356958 master-0 kubenswrapper[31559]: I0216 02:40:03.356912 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-logs" (OuterVolumeSpecName: "logs") pod "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" (UID: "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:03.357359 master-0 kubenswrapper[31559]: I0216 02:40:03.357319 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:03.359956 master-0 kubenswrapper[31559]: I0216 02:40:03.359908 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-kube-api-access-wlbjg" (OuterVolumeSpecName: "kube-api-access-wlbjg") pod "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" (UID: "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76"). InnerVolumeSpecName "kube-api-access-wlbjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:03.362010 master-0 kubenswrapper[31559]: I0216 02:40:03.361964 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-scripts" (OuterVolumeSpecName: "scripts") pod "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" (UID: "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:03.437609 master-0 kubenswrapper[31559]: I0216 02:40:03.437551 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-config-data" (OuterVolumeSpecName: "config-data") pod "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" (UID: "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:03.459664 master-0 kubenswrapper[31559]: I0216 02:40:03.459052 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlbjg\" (UniqueName: \"kubernetes.io/projected/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-kube-api-access-wlbjg\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:03.459664 master-0 kubenswrapper[31559]: I0216 02:40:03.459095 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:03.459664 master-0 kubenswrapper[31559]: I0216 02:40:03.459106 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:03.461469 master-0 kubenswrapper[31559]: I0216 02:40:03.461412 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" (UID: "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:03.472788 master-0 kubenswrapper[31559]: I0216 02:40:03.472477 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86b5fc89c6-rhb4k"] Feb 16 02:40:03.523639 master-0 kubenswrapper[31559]: I0216 02:40:03.519582 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" (UID: "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:03.527259 master-0 kubenswrapper[31559]: I0216 02:40:03.526769 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86b5fc89c6-rhb4k" event={"ID":"e89de18a-e96b-4703-836c-354a5a6f88ee","Type":"ContainerStarted","Data":"6af30bbf6fb94f6c32635dda43cae06a1e5fddb10e52fe36a0e6cefe0fe94143"} Feb 16 02:40:03.529889 master-0 kubenswrapper[31559]: I0216 02:40:03.529860 31559 generic.go:334] "Generic (PLEG): container finished" podID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerID="8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1" exitCode=0 Feb 16 02:40:03.529978 master-0 kubenswrapper[31559]: I0216 02:40:03.529909 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6896ff5478-9txrd" event={"ID":"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76","Type":"ContainerDied","Data":"8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1"} Feb 16 02:40:03.529978 master-0 kubenswrapper[31559]: I0216 02:40:03.529927 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-6896ff5478-9txrd" event={"ID":"d2ed2e4a-8cda-4a54-8b5d-01b559d15d76","Type":"ContainerDied","Data":"42d09209fa1f383a9ba2c1c6066fea1070816588d9dcd70238ed55aa8a08d22d"} Feb 16 02:40:03.529978 master-0 kubenswrapper[31559]: I0216 02:40:03.529943 31559 scope.go:117] "RemoveContainer" containerID="8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1" Feb 16 02:40:03.530081 master-0 kubenswrapper[31559]: I0216 02:40:03.529984 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6896ff5478-9txrd" Feb 16 02:40:03.535894 master-0 kubenswrapper[31559]: I0216 02:40:03.533500 31559 generic.go:334] "Generic (PLEG): container finished" podID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerID="0a4e84b2ea6f10e1d3e74861c5da5436c395ea15046a8e869a906c01a02e571f" exitCode=1 Feb 16 02:40:03.535894 master-0 kubenswrapper[31559]: I0216 02:40:03.533580 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerDied","Data":"0a4e84b2ea6f10e1d3e74861c5da5436c395ea15046a8e869a906c01a02e571f"} Feb 16 02:40:03.535894 master-0 kubenswrapper[31559]: I0216 02:40:03.535135 31559 scope.go:117] "RemoveContainer" containerID="0a4e84b2ea6f10e1d3e74861c5da5436c395ea15046a8e869a906c01a02e571f" Feb 16 02:40:03.539623 master-0 kubenswrapper[31559]: I0216 02:40:03.536758 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" Feb 16 02:40:03.539623 master-0 kubenswrapper[31559]: I0216 02:40:03.537421 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4994bbb5-4cdht" event={"ID":"43e8d4ad-2cb2-43ad-9255-b8285c08e9d5","Type":"ContainerDied","Data":"ced7d97d6344a32ca2a2820ea5f13206843efcb26cb76c996fe0f19548a02f5f"} Feb 16 02:40:03.558710 master-0 kubenswrapper[31559]: I0216 02:40:03.558654 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" (UID: "d2ed2e4a-8cda-4a54-8b5d-01b559d15d76"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:03.562325 master-0 kubenswrapper[31559]: I0216 02:40:03.562229 31559 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:03.562325 master-0 kubenswrapper[31559]: I0216 02:40:03.562287 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:03.562325 master-0 kubenswrapper[31559]: I0216 02:40:03.562299 31559 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:03.669548 master-0 kubenswrapper[31559]: I0216 02:40:03.668091 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f4994bbb5-4cdht"] Feb 16 02:40:03.679232 master-0 kubenswrapper[31559]: I0216 02:40:03.679174 31559 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f4994bbb5-4cdht"] Feb 16 02:40:03.680393 master-0 kubenswrapper[31559]: I0216 02:40:03.679342 31559 scope.go:117] "RemoveContainer" containerID="5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435" Feb 16 02:40:03.785944 master-0 kubenswrapper[31559]: I0216 02:40:03.785906 31559 scope.go:117] "RemoveContainer" containerID="8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1" Feb 16 02:40:03.786608 master-0 kubenswrapper[31559]: E0216 02:40:03.786367 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1\": container with ID starting with 8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1 not found: ID does not exist" containerID="8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1" Feb 16 02:40:03.786608 master-0 kubenswrapper[31559]: I0216 02:40:03.786398 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1"} err="failed to get container status \"8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1\": rpc error: code = NotFound desc = could not find container \"8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1\": container with ID starting with 8a2500d53ef512fc7f8fc6cec03dcab9aa2e075d86660812d570b721f740dda1 not found: ID does not exist" Feb 16 02:40:03.786608 master-0 kubenswrapper[31559]: I0216 02:40:03.786418 31559 scope.go:117] "RemoveContainer" containerID="5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435" Feb 16 02:40:03.786910 master-0 kubenswrapper[31559]: E0216 02:40:03.786862 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435\": container with ID starting with 5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435 not found: ID does not exist" containerID="5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435" Feb 16 02:40:03.786910 master-0 kubenswrapper[31559]: I0216 02:40:03.786899 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435"} err="failed to get container status \"5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435\": rpc error: code = NotFound desc = could not find container \"5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435\": container with ID starting with 5a972e88a90f54a53819ae538096e27d00cdeaee8a27f592d4ffb473018dd435 not found: ID does not exist" Feb 16 02:40:03.786999 master-0 kubenswrapper[31559]: I0216 02:40:03.786917 31559 scope.go:117] "RemoveContainer" containerID="a1d9f4e6292669bc3ff67d074e9ea41ae65a31160716f74e2635233d888e7c50" Feb 16 02:40:03.834809 master-0 kubenswrapper[31559]: I0216 02:40:03.834732 31559 scope.go:117] "RemoveContainer" containerID="a33fd76e6d5311329dd15c3336604ea365083f06ba5c6c82724f49645ee3d5af" Feb 16 02:40:03.951797 master-0 kubenswrapper[31559]: I0216 02:40:03.951067 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" path="/var/lib/kubelet/pods/43e8d4ad-2cb2-43ad-9255-b8285c08e9d5/volumes" Feb 16 02:40:04.016460 master-0 kubenswrapper[31559]: I0216 02:40:04.012905 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-c9d68ffd6-4ln7q" Feb 16 02:40:04.016460 master-0 kubenswrapper[31559]: I0216 02:40:04.014237 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6896ff5478-9txrd"] Feb 16 02:40:04.043468 master-0 kubenswrapper[31559]: I0216 02:40:04.041503 31559 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/placement-6896ff5478-9txrd"] Feb 16 02:40:04.099478 master-0 kubenswrapper[31559]: I0216 02:40:04.094681 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-86f55b5cf6-zxgrr"] Feb 16 02:40:04.560530 master-0 kubenswrapper[31559]: I0216 02:40:04.559626 31559 generic.go:334] "Generic (PLEG): container finished" podID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerID="9ff6a2ac1cdf56f52d161c83ed23c0e15c9865bbf0a306439cf0f5e6bbcf32ce" exitCode=1 Feb 16 02:40:04.560530 master-0 kubenswrapper[31559]: I0216 02:40:04.559701 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerDied","Data":"9ff6a2ac1cdf56f52d161c83ed23c0e15c9865bbf0a306439cf0f5e6bbcf32ce"} Feb 16 02:40:04.560530 master-0 kubenswrapper[31559]: I0216 02:40:04.559734 31559 scope.go:117] "RemoveContainer" containerID="0a4e84b2ea6f10e1d3e74861c5da5436c395ea15046a8e869a906c01a02e571f" Feb 16 02:40:04.560530 master-0 kubenswrapper[31559]: I0216 02:40:04.560342 31559 scope.go:117] "RemoveContainer" containerID="9ff6a2ac1cdf56f52d161c83ed23c0e15c9865bbf0a306439cf0f5e6bbcf32ce" Feb 16 02:40:04.561156 master-0 kubenswrapper[31559]: E0216 02:40:04.560726 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-86f55b5cf6-zxgrr_openstack(bbb13644-ecbf-43f1-9203-c714b5485f17)\"" pod="openstack/ironic-86f55b5cf6-zxgrr" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" Feb 16 02:40:04.564820 master-0 kubenswrapper[31559]: I0216 02:40:04.563849 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86b5fc89c6-rhb4k" event={"ID":"e89de18a-e96b-4703-836c-354a5a6f88ee","Type":"ContainerStarted","Data":"42e62173da8db59b16f39e2c2dff89d7505570abc7056934d6b818fe625d26c6"} Feb 16 
02:40:04.564820 master-0 kubenswrapper[31559]: I0216 02:40:04.563902 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86b5fc89c6-rhb4k" event={"ID":"e89de18a-e96b-4703-836c-354a5a6f88ee","Type":"ContainerStarted","Data":"59d6b463d79576ef44ecdb36b00bcbc6b32f3b1705773eb250ad0fe302fab479"} Feb 16 02:40:04.564820 master-0 kubenswrapper[31559]: I0216 02:40:04.564053 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86b5fc89c6-rhb4k" Feb 16 02:40:04.564820 master-0 kubenswrapper[31559]: I0216 02:40:04.564075 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86b5fc89c6-rhb4k" Feb 16 02:40:04.650483 master-0 kubenswrapper[31559]: I0216 02:40:04.649578 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-86b5fc89c6-rhb4k" podStartSLOduration=2.649556202 podStartE2EDuration="2.649556202s" podCreationTimestamp="2026-02-16 02:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:40:04.620166637 +0000 UTC m=+1056.964772652" watchObservedRunningTime="2026-02-16 02:40:04.649556202 +0000 UTC m=+1056.994162217" Feb 16 02:40:05.077752 master-0 kubenswrapper[31559]: I0216 02:40:05.077706 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-dde57-scheduler-0" Feb 16 02:40:05.594719 master-0 kubenswrapper[31559]: I0216 02:40:05.594669 31559 generic.go:334] "Generic (PLEG): container finished" podID="ec163f18-db19-4327-ae8b-6feb4c6004af" containerID="8f139a71e4d429c694dd54dcec5c0508e8a590e9c75fd37980746cd97af2f3ee" exitCode=1 Feb 16 02:40:05.595472 master-0 kubenswrapper[31559]: I0216 02:40:05.595077 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" 
event={"ID":"ec163f18-db19-4327-ae8b-6feb4c6004af","Type":"ContainerDied","Data":"8f139a71e4d429c694dd54dcec5c0508e8a590e9c75fd37980746cd97af2f3ee"} Feb 16 02:40:05.595598 master-0 kubenswrapper[31559]: I0216 02:40:05.595585 31559 scope.go:117] "RemoveContainer" containerID="e8fb80b64e740d447995e951c1f82f5aee648bdeca359a68246174f512005a67" Feb 16 02:40:05.596910 master-0 kubenswrapper[31559]: I0216 02:40:05.596892 31559 scope.go:117] "RemoveContainer" containerID="8f139a71e4d429c694dd54dcec5c0508e8a590e9c75fd37980746cd97af2f3ee" Feb 16 02:40:05.597672 master-0 kubenswrapper[31559]: E0216 02:40:05.597625 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-57d76bb68d-7wt45_openstack(ec163f18-db19-4327-ae8b-6feb4c6004af)\"" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" podUID="ec163f18-db19-4327-ae8b-6feb4c6004af" Feb 16 02:40:05.617250 master-0 kubenswrapper[31559]: I0216 02:40:05.617180 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-86f55b5cf6-zxgrr" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api-log" containerID="cri-o://b85a5b72da96bec9978439d8cea19f4daef5f5443be540ddd413f27358a35045" gracePeriod=60 Feb 16 02:40:05.957687 master-0 kubenswrapper[31559]: I0216 02:40:05.957625 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" path="/var/lib/kubelet/pods/d2ed2e4a-8cda-4a54-8b5d-01b559d15d76/volumes" Feb 16 02:40:05.994108 master-0 kubenswrapper[31559]: I0216 02:40:05.994038 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-dde57-backup-0" Feb 16 02:40:06.384366 master-0 kubenswrapper[31559]: I0216 02:40:06.384205 31559 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ironic-inspector-db-sync-2p5mt"] Feb 16 02:40:06.384800 master-0 kubenswrapper[31559]: E0216 02:40:06.384773 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerName="placement-log" Feb 16 02:40:06.384800 master-0 kubenswrapper[31559]: I0216 02:40:06.384796 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerName="placement-log" Feb 16 02:40:06.384887 master-0 kubenswrapper[31559]: E0216 02:40:06.384824 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerName="placement-api" Feb 16 02:40:06.384887 master-0 kubenswrapper[31559]: I0216 02:40:06.384834 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerName="placement-api" Feb 16 02:40:06.384887 master-0 kubenswrapper[31559]: E0216 02:40:06.384853 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" containerName="init" Feb 16 02:40:06.384887 master-0 kubenswrapper[31559]: I0216 02:40:06.384862 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" containerName="init" Feb 16 02:40:06.385023 master-0 kubenswrapper[31559]: E0216 02:40:06.384916 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" containerName="dnsmasq-dns" Feb 16 02:40:06.385023 master-0 kubenswrapper[31559]: I0216 02:40:06.384924 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" containerName="dnsmasq-dns" Feb 16 02:40:06.385317 master-0 kubenswrapper[31559]: I0216 02:40:06.385285 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerName="placement-api" Feb 16 02:40:06.385370 master-0 kubenswrapper[31559]: I0216 
02:40:06.385326 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ed2e4a-8cda-4a54-8b5d-01b559d15d76" containerName="placement-log" Feb 16 02:40:06.385403 master-0 kubenswrapper[31559]: I0216 02:40:06.385380 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e8d4ad-2cb2-43ad-9255-b8285c08e9d5" containerName="dnsmasq-dns" Feb 16 02:40:06.388581 master-0 kubenswrapper[31559]: I0216 02:40:06.388530 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.390957 master-0 kubenswrapper[31559]: I0216 02:40:06.390918 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 02:40:06.392165 master-0 kubenswrapper[31559]: I0216 02:40:06.392105 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 02:40:06.407093 master-0 kubenswrapper[31559]: I0216 02:40:06.407037 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-2p5mt"] Feb 16 02:40:06.474672 master-0 kubenswrapper[31559]: I0216 02:40:06.473298 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-combined-ca-bundle\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.474672 master-0 kubenswrapper[31559]: I0216 02:40:06.473376 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.474672 
master-0 kubenswrapper[31559]: I0216 02:40:06.473406 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-scripts\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.474672 master-0 kubenswrapper[31559]: I0216 02:40:06.474031 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-config\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.474672 master-0 kubenswrapper[31559]: I0216 02:40:06.474297 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.474672 master-0 kubenswrapper[31559]: I0216 02:40:06.474544 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lmpv\" (UniqueName: \"kubernetes.io/projected/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-kube-api-access-6lmpv\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.474672 master-0 kubenswrapper[31559]: I0216 02:40:06.474614 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-etc-podinfo\") pod 
\"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.577165 master-0 kubenswrapper[31559]: I0216 02:40:06.577094 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.577358 master-0 kubenswrapper[31559]: I0216 02:40:06.577230 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lmpv\" (UniqueName: \"kubernetes.io/projected/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-kube-api-access-6lmpv\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.577358 master-0 kubenswrapper[31559]: I0216 02:40:06.577290 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-etc-podinfo\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.577554 master-0 kubenswrapper[31559]: I0216 02:40:06.577378 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-combined-ca-bundle\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.577554 master-0 kubenswrapper[31559]: I0216 02:40:06.577523 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.577727 master-0 kubenswrapper[31559]: I0216 02:40:06.577618 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-scripts\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.577727 master-0 kubenswrapper[31559]: I0216 02:40:06.577644 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.577880 master-0 kubenswrapper[31559]: I0216 02:40:06.577809 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-config\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.579156 master-0 kubenswrapper[31559]: I0216 02:40:06.578756 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.582339 master-0 kubenswrapper[31559]: I0216 02:40:06.582283 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-etc-podinfo\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.583994 master-0 kubenswrapper[31559]: I0216 02:40:06.583911 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-config\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.584697 master-0 kubenswrapper[31559]: I0216 02:40:06.584655 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-combined-ca-bundle\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.588784 master-0 kubenswrapper[31559]: I0216 02:40:06.588745 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-scripts\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.599023 master-0 kubenswrapper[31559]: I0216 02:40:06.598963 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lmpv\" (UniqueName: \"kubernetes.io/projected/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-kube-api-access-6lmpv\") pod \"ironic-inspector-db-sync-2p5mt\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.635618 master-0 kubenswrapper[31559]: I0216 02:40:06.635470 31559 generic.go:334] "Generic (PLEG): container finished" 
podID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerID="b85a5b72da96bec9978439d8cea19f4daef5f5443be540ddd413f27358a35045" exitCode=143 Feb 16 02:40:06.635618 master-0 kubenswrapper[31559]: I0216 02:40:06.635530 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerDied","Data":"b85a5b72da96bec9978439d8cea19f4daef5f5443be540ddd413f27358a35045"} Feb 16 02:40:06.736594 master-0 kubenswrapper[31559]: I0216 02:40:06.736518 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:06.897942 master-0 kubenswrapper[31559]: I0216 02:40:06.893273 31559 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:40:06.897942 master-0 kubenswrapper[31559]: I0216 02:40:06.894157 31559 scope.go:117] "RemoveContainer" containerID="8f139a71e4d429c694dd54dcec5c0508e8a590e9c75fd37980746cd97af2f3ee" Feb 16 02:40:06.897942 master-0 kubenswrapper[31559]: E0216 02:40:06.894479 31559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-57d76bb68d-7wt45_openstack(ec163f18-db19-4327-ae8b-6feb4c6004af)\"" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" podUID="ec163f18-db19-4327-ae8b-6feb4c6004af" Feb 16 02:40:07.008207 master-0 kubenswrapper[31559]: I0216 02:40:07.008159 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:40:07.101599 master-0 kubenswrapper[31559]: I0216 02:40:07.100147 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:40:07.232696 master-0 kubenswrapper[31559]: I0216 02:40:07.232618 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/bbb13644-ecbf-43f1-9203-c714b5485f17-etc-podinfo\") pod \"bbb13644-ecbf-43f1-9203-c714b5485f17\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " Feb 16 02:40:07.232696 master-0 kubenswrapper[31559]: I0216 02:40:07.232686 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-combined-ca-bundle\") pod \"bbb13644-ecbf-43f1-9203-c714b5485f17\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " Feb 16 02:40:07.232993 master-0 kubenswrapper[31559]: I0216 02:40:07.232838 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-custom\") pod \"bbb13644-ecbf-43f1-9203-c714b5485f17\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " Feb 16 02:40:07.232993 master-0 kubenswrapper[31559]: I0216 02:40:07.232923 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-scripts\") pod \"bbb13644-ecbf-43f1-9203-c714b5485f17\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " Feb 16 02:40:07.233111 master-0 kubenswrapper[31559]: I0216 02:40:07.233076 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-logs\") pod \"bbb13644-ecbf-43f1-9203-c714b5485f17\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " Feb 16 02:40:07.233282 master-0 kubenswrapper[31559]: I0216 02:40:07.233256 31559 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data\") pod \"bbb13644-ecbf-43f1-9203-c714b5485f17\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " Feb 16 02:40:07.233331 master-0 kubenswrapper[31559]: I0216 02:40:07.233289 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtdnt\" (UniqueName: \"kubernetes.io/projected/bbb13644-ecbf-43f1-9203-c714b5485f17-kube-api-access-xtdnt\") pod \"bbb13644-ecbf-43f1-9203-c714b5485f17\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " Feb 16 02:40:07.233371 master-0 kubenswrapper[31559]: I0216 02:40:07.233343 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-merged\") pod \"bbb13644-ecbf-43f1-9203-c714b5485f17\" (UID: \"bbb13644-ecbf-43f1-9203-c714b5485f17\") " Feb 16 02:40:07.235545 master-0 kubenswrapper[31559]: I0216 02:40:07.235510 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "bbb13644-ecbf-43f1-9203-c714b5485f17" (UID: "bbb13644-ecbf-43f1-9203-c714b5485f17"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:07.241267 master-0 kubenswrapper[31559]: I0216 02:40:07.241231 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bbb13644-ecbf-43f1-9203-c714b5485f17-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "bbb13644-ecbf-43f1-9203-c714b5485f17" (UID: "bbb13644-ecbf-43f1-9203-c714b5485f17"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: I0216 02:40:07.243354 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-logs" (OuterVolumeSpecName: "logs") pod "bbb13644-ecbf-43f1-9203-c714b5485f17" (UID: "bbb13644-ecbf-43f1-9203-c714b5485f17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: I0216 02:40:07.243949 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-scripts" (OuterVolumeSpecName: "scripts") pod "bbb13644-ecbf-43f1-9203-c714b5485f17" (UID: "bbb13644-ecbf-43f1-9203-c714b5485f17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: I0216 02:40:07.245702 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-prwlf"] Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: E0216 02:40:07.246352 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api" Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: I0216 02:40:07.246370 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api" Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: E0216 02:40:07.246388 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="init" Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: I0216 02:40:07.246397 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="init" Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: E0216 02:40:07.246417 31559 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api" Feb 16 02:40:07.246494 master-0 kubenswrapper[31559]: I0216 02:40:07.246426 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api" Feb 16 02:40:07.252204 master-0 kubenswrapper[31559]: E0216 02:40:07.246589 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api-log" Feb 16 02:40:07.252204 master-0 kubenswrapper[31559]: I0216 02:40:07.246604 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api-log" Feb 16 02:40:07.252204 master-0 kubenswrapper[31559]: I0216 02:40:07.246890 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bbb13644-ecbf-43f1-9203-c714b5485f17" (UID: "bbb13644-ecbf-43f1-9203-c714b5485f17"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:07.252204 master-0 kubenswrapper[31559]: I0216 02:40:07.246909 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api" Feb 16 02:40:07.252204 master-0 kubenswrapper[31559]: I0216 02:40:07.246952 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api-log" Feb 16 02:40:07.252204 master-0 kubenswrapper[31559]: I0216 02:40:07.246967 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="ironic-api" Feb 16 02:40:07.252204 master-0 kubenswrapper[31559]: I0216 02:40:07.247836 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:07.252204 master-0 kubenswrapper[31559]: I0216 02:40:07.250017 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb13644-ecbf-43f1-9203-c714b5485f17-kube-api-access-xtdnt" (OuterVolumeSpecName: "kube-api-access-xtdnt") pod "bbb13644-ecbf-43f1-9203-c714b5485f17" (UID: "bbb13644-ecbf-43f1-9203-c714b5485f17"). InnerVolumeSpecName "kube-api-access-xtdnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:07.275749 master-0 kubenswrapper[31559]: I0216 02:40:07.275297 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-prwlf"] Feb 16 02:40:07.295650 master-0 kubenswrapper[31559]: I0216 02:40:07.294532 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-2p5mt"] Feb 16 02:40:07.303055 master-0 kubenswrapper[31559]: I0216 02:40:07.303020 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data" (OuterVolumeSpecName: "config-data") pod "bbb13644-ecbf-43f1-9203-c714b5485f17" (UID: "bbb13644-ecbf-43f1-9203-c714b5485f17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:07.329331 master-0 kubenswrapper[31559]: I0216 02:40:07.329287 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xv575"] Feb 16 02:40:07.329917 master-0 kubenswrapper[31559]: E0216 02:40:07.329889 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="init" Feb 16 02:40:07.329917 master-0 kubenswrapper[31559]: I0216 02:40:07.329911 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" containerName="init" Feb 16 02:40:07.330882 master-0 kubenswrapper[31559]: I0216 02:40:07.330856 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:07.342770 master-0 kubenswrapper[31559]: I0216 02:40:07.342718 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-dbe0-account-create-update-jbp2r"] Feb 16 02:40:07.344453 master-0 kubenswrapper[31559]: I0216 02:40:07.344415 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:07.352592 master-0 kubenswrapper[31559]: I0216 02:40:07.351775 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 02:40:07.361483 master-0 kubenswrapper[31559]: I0216 02:40:07.361406 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbb13644-ecbf-43f1-9203-c714b5485f17" (UID: "bbb13644-ecbf-43f1-9203-c714b5485f17"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:07.361906 master-0 kubenswrapper[31559]: I0216 02:40:07.361855 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4mm5\" (UniqueName: \"kubernetes.io/projected/323b3672-0931-4d00-9d68-6d4eae9a4cec-kube-api-access-p4mm5\") pod \"nova-api-db-create-prwlf\" (UID: \"323b3672-0931-4d00-9d68-6d4eae9a4cec\") " pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:07.362112 master-0 kubenswrapper[31559]: I0216 02:40:07.362090 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323b3672-0931-4d00-9d68-6d4eae9a4cec-operator-scripts\") pod \"nova-api-db-create-prwlf\" (UID: \"323b3672-0931-4d00-9d68-6d4eae9a4cec\") " pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:07.365706 master-0 kubenswrapper[31559]: I0216 02:40:07.365420 31559 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/bbb13644-ecbf-43f1-9203-c714b5485f17-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:07.365706 master-0 kubenswrapper[31559]: I0216 02:40:07.365479 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:07.365706 master-0 kubenswrapper[31559]: I0216 02:40:07.365490 31559 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:07.365706 master-0 kubenswrapper[31559]: I0216 02:40:07.365505 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-scripts\") 
on node \"master-0\" DevicePath \"\"" Feb 16 02:40:07.365706 master-0 kubenswrapper[31559]: I0216 02:40:07.365514 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:07.365706 master-0 kubenswrapper[31559]: I0216 02:40:07.365528 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:07.365706 master-0 kubenswrapper[31559]: I0216 02:40:07.365538 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtdnt\" (UniqueName: \"kubernetes.io/projected/bbb13644-ecbf-43f1-9203-c714b5485f17-kube-api-access-xtdnt\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:07.365706 master-0 kubenswrapper[31559]: I0216 02:40:07.365549 31559 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bbb13644-ecbf-43f1-9203-c714b5485f17-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:07.380989 master-0 kubenswrapper[31559]: I0216 02:40:07.380924 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xv575"] Feb 16 02:40:07.406259 master-0 kubenswrapper[31559]: I0216 02:40:07.406195 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dbe0-account-create-update-jbp2r"] Feb 16 02:40:07.471070 master-0 kubenswrapper[31559]: I0216 02:40:07.466114 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vc447"] Feb 16 02:40:07.471070 master-0 kubenswrapper[31559]: I0216 02:40:07.467846 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vc447" Feb 16 02:40:07.471070 master-0 kubenswrapper[31559]: I0216 02:40:07.470832 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed677e03-0917-4744-93f0-d7e64470c27d-operator-scripts\") pod \"nova-api-dbe0-account-create-update-jbp2r\" (UID: \"ed677e03-0917-4744-93f0-d7e64470c27d\") " pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:07.471070 master-0 kubenswrapper[31559]: I0216 02:40:07.470910 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshp9\" (UniqueName: \"kubernetes.io/projected/85150fe7-7e88-4ebb-a2c7-643274767b45-kube-api-access-jshp9\") pod \"nova-cell0-db-create-xv575\" (UID: \"85150fe7-7e88-4ebb-a2c7-643274767b45\") " pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:07.471070 master-0 kubenswrapper[31559]: I0216 02:40:07.470938 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85150fe7-7e88-4ebb-a2c7-643274767b45-operator-scripts\") pod \"nova-cell0-db-create-xv575\" (UID: \"85150fe7-7e88-4ebb-a2c7-643274767b45\") " pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:07.471070 master-0 kubenswrapper[31559]: I0216 02:40:07.470969 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4mm5\" (UniqueName: \"kubernetes.io/projected/323b3672-0931-4d00-9d68-6d4eae9a4cec-kube-api-access-p4mm5\") pod \"nova-api-db-create-prwlf\" (UID: \"323b3672-0931-4d00-9d68-6d4eae9a4cec\") " pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:07.471070 master-0 kubenswrapper[31559]: I0216 02:40:07.471005 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ztl\" (UniqueName: 
\"kubernetes.io/projected/ed677e03-0917-4744-93f0-d7e64470c27d-kube-api-access-f9ztl\") pod \"nova-api-dbe0-account-create-update-jbp2r\" (UID: \"ed677e03-0917-4744-93f0-d7e64470c27d\") " pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:07.471070 master-0 kubenswrapper[31559]: I0216 02:40:07.471055 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323b3672-0931-4d00-9d68-6d4eae9a4cec-operator-scripts\") pod \"nova-api-db-create-prwlf\" (UID: \"323b3672-0931-4d00-9d68-6d4eae9a4cec\") " pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:07.471766 master-0 kubenswrapper[31559]: I0216 02:40:07.471752 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323b3672-0931-4d00-9d68-6d4eae9a4cec-operator-scripts\") pod \"nova-api-db-create-prwlf\" (UID: \"323b3672-0931-4d00-9d68-6d4eae9a4cec\") " pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:07.493168 master-0 kubenswrapper[31559]: I0216 02:40:07.492923 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4mm5\" (UniqueName: \"kubernetes.io/projected/323b3672-0931-4d00-9d68-6d4eae9a4cec-kube-api-access-p4mm5\") pod \"nova-api-db-create-prwlf\" (UID: \"323b3672-0931-4d00-9d68-6d4eae9a4cec\") " pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:07.498612 master-0 kubenswrapper[31559]: I0216 02:40:07.498532 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vc447"] Feb 16 02:40:07.531673 master-0 kubenswrapper[31559]: I0216 02:40:07.531611 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e957-account-create-update-7245k"] Feb 16 02:40:07.533833 master-0 kubenswrapper[31559]: I0216 02:40:07.533791 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e957-account-create-update-7245k" Feb 16 02:40:07.535707 master-0 kubenswrapper[31559]: I0216 02:40:07.535607 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 02:40:07.541673 master-0 kubenswrapper[31559]: I0216 02:40:07.541218 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e957-account-create-update-7245k"] Feb 16 02:40:07.576076 master-0 kubenswrapper[31559]: I0216 02:40:07.575998 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed677e03-0917-4744-93f0-d7e64470c27d-operator-scripts\") pod \"nova-api-dbe0-account-create-update-jbp2r\" (UID: \"ed677e03-0917-4744-93f0-d7e64470c27d\") " pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:07.576328 master-0 kubenswrapper[31559]: I0216 02:40:07.576169 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr5zn\" (UniqueName: \"kubernetes.io/projected/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-kube-api-access-rr5zn\") pod \"nova-cell1-db-create-vc447\" (UID: \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\") " pod="openstack/nova-cell1-db-create-vc447" Feb 16 02:40:07.576328 master-0 kubenswrapper[31559]: I0216 02:40:07.576216 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jshp9\" (UniqueName: \"kubernetes.io/projected/85150fe7-7e88-4ebb-a2c7-643274767b45-kube-api-access-jshp9\") pod \"nova-cell0-db-create-xv575\" (UID: \"85150fe7-7e88-4ebb-a2c7-643274767b45\") " pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:07.576328 master-0 kubenswrapper[31559]: I0216 02:40:07.576253 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/85150fe7-7e88-4ebb-a2c7-643274767b45-operator-scripts\") pod \"nova-cell0-db-create-xv575\" (UID: \"85150fe7-7e88-4ebb-a2c7-643274767b45\") " pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:07.576543 master-0 kubenswrapper[31559]: I0216 02:40:07.576494 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9ztl\" (UniqueName: \"kubernetes.io/projected/ed677e03-0917-4744-93f0-d7e64470c27d-kube-api-access-f9ztl\") pod \"nova-api-dbe0-account-create-update-jbp2r\" (UID: \"ed677e03-0917-4744-93f0-d7e64470c27d\") " pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:07.576595 master-0 kubenswrapper[31559]: I0216 02:40:07.576573 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-operator-scripts\") pod \"nova-cell1-db-create-vc447\" (UID: \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\") " pod="openstack/nova-cell1-db-create-vc447" Feb 16 02:40:07.576891 master-0 kubenswrapper[31559]: I0216 02:40:07.576843 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed677e03-0917-4744-93f0-d7e64470c27d-operator-scripts\") pod \"nova-api-dbe0-account-create-update-jbp2r\" (UID: \"ed677e03-0917-4744-93f0-d7e64470c27d\") " pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:07.577552 master-0 kubenswrapper[31559]: I0216 02:40:07.577511 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85150fe7-7e88-4ebb-a2c7-643274767b45-operator-scripts\") pod \"nova-cell0-db-create-xv575\" (UID: \"85150fe7-7e88-4ebb-a2c7-643274767b45\") " pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:07.589949 master-0 kubenswrapper[31559]: I0216 02:40:07.589883 31559 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:07.600131 master-0 kubenswrapper[31559]: I0216 02:40:07.600067 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9ztl\" (UniqueName: \"kubernetes.io/projected/ed677e03-0917-4744-93f0-d7e64470c27d-kube-api-access-f9ztl\") pod \"nova-api-dbe0-account-create-update-jbp2r\" (UID: \"ed677e03-0917-4744-93f0-d7e64470c27d\") " pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:07.603256 master-0 kubenswrapper[31559]: I0216 02:40:07.603197 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshp9\" (UniqueName: \"kubernetes.io/projected/85150fe7-7e88-4ebb-a2c7-643274767b45-kube-api-access-jshp9\") pod \"nova-cell0-db-create-xv575\" (UID: \"85150fe7-7e88-4ebb-a2c7-643274767b45\") " pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:07.656615 master-0 kubenswrapper[31559]: I0216 02:40:07.656541 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-86f55b5cf6-zxgrr" event={"ID":"bbb13644-ecbf-43f1-9203-c714b5485f17","Type":"ContainerDied","Data":"170a1d2795445711807f791c40d0b8140f0f02e67dec9199cbffaded413cd093"} Feb 16 02:40:07.656615 master-0 kubenswrapper[31559]: I0216 02:40:07.656585 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-86f55b5cf6-zxgrr" Feb 16 02:40:07.656898 master-0 kubenswrapper[31559]: I0216 02:40:07.656631 31559 scope.go:117] "RemoveContainer" containerID="9ff6a2ac1cdf56f52d161c83ed23c0e15c9865bbf0a306439cf0f5e6bbcf32ce" Feb 16 02:40:07.659931 master-0 kubenswrapper[31559]: I0216 02:40:07.659884 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:07.669755 master-0 kubenswrapper[31559]: I0216 02:40:07.660780 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-2p5mt" event={"ID":"e68f2b97-da69-4ad8-aa22-20cbf6fcb819","Type":"ContainerStarted","Data":"5c16a0dee925858f3663d66585e3aa25b12c6573f7e1bba6974bdae0c39da006"} Feb 16 02:40:07.680550 master-0 kubenswrapper[31559]: I0216 02:40:07.680426 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr5zn\" (UniqueName: \"kubernetes.io/projected/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-kube-api-access-rr5zn\") pod \"nova-cell1-db-create-vc447\" (UID: \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\") " pod="openstack/nova-cell1-db-create-vc447" Feb 16 02:40:07.680771 master-0 kubenswrapper[31559]: I0216 02:40:07.680590 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-operator-scripts\") pod \"nova-cell1-db-create-vc447\" (UID: \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\") " pod="openstack/nova-cell1-db-create-vc447" Feb 16 02:40:07.680771 master-0 kubenswrapper[31559]: I0216 02:40:07.680762 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3376cdb9-0b42-43bc-a145-81508f342ccd-operator-scripts\") pod \"nova-cell0-e957-account-create-update-7245k\" (UID: \"3376cdb9-0b42-43bc-a145-81508f342ccd\") " pod="openstack/nova-cell0-e957-account-create-update-7245k" Feb 16 02:40:07.682529 master-0 kubenswrapper[31559]: I0216 02:40:07.680807 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8hdc\" (UniqueName: \"kubernetes.io/projected/3376cdb9-0b42-43bc-a145-81508f342ccd-kube-api-access-m8hdc\") pod 
\"nova-cell0-e957-account-create-update-7245k\" (UID: \"3376cdb9-0b42-43bc-a145-81508f342ccd\") " pod="openstack/nova-cell0-e957-account-create-update-7245k"
Feb 16 02:40:07.723251 master-0 kubenswrapper[31559]: I0216 02:40:07.689525 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-operator-scripts\") pod \"nova-cell1-db-create-vc447\" (UID: \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\") " pod="openstack/nova-cell1-db-create-vc447"
Feb 16 02:40:07.723251 master-0 kubenswrapper[31559]: I0216 02:40:07.706419 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dbe0-account-create-update-jbp2r"
Feb 16 02:40:07.723251 master-0 kubenswrapper[31559]: I0216 02:40:07.711055 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr5zn\" (UniqueName: \"kubernetes.io/projected/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-kube-api-access-rr5zn\") pod \"nova-cell1-db-create-vc447\" (UID: \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\") " pod="openstack/nova-cell1-db-create-vc447"
Feb 16 02:40:07.728176 master-0 kubenswrapper[31559]: I0216 02:40:07.727395 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-245b-account-create-update-rgllf"]
Feb 16 02:40:07.733625 master-0 kubenswrapper[31559]: I0216 02:40:07.729707 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-245b-account-create-update-rgllf"
Feb 16 02:40:07.734692 master-0 kubenswrapper[31559]: I0216 02:40:07.734331 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 16 02:40:07.760750 master-0 kubenswrapper[31559]: I0216 02:40:07.760665 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-245b-account-create-update-rgllf"]
Feb 16 02:40:07.787731 master-0 kubenswrapper[31559]: I0216 02:40:07.787479 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3376cdb9-0b42-43bc-a145-81508f342ccd-operator-scripts\") pod \"nova-cell0-e957-account-create-update-7245k\" (UID: \"3376cdb9-0b42-43bc-a145-81508f342ccd\") " pod="openstack/nova-cell0-e957-account-create-update-7245k"
Feb 16 02:40:07.787731 master-0 kubenswrapper[31559]: I0216 02:40:07.787679 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8hdc\" (UniqueName: \"kubernetes.io/projected/3376cdb9-0b42-43bc-a145-81508f342ccd-kube-api-access-m8hdc\") pod \"nova-cell0-e957-account-create-update-7245k\" (UID: \"3376cdb9-0b42-43bc-a145-81508f342ccd\") " pod="openstack/nova-cell0-e957-account-create-update-7245k"
Feb 16 02:40:07.788574 master-0 kubenswrapper[31559]: I0216 02:40:07.788502 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3376cdb9-0b42-43bc-a145-81508f342ccd-operator-scripts\") pod \"nova-cell0-e957-account-create-update-7245k\" (UID: \"3376cdb9-0b42-43bc-a145-81508f342ccd\") " pod="openstack/nova-cell0-e957-account-create-update-7245k"
Feb 16 02:40:07.813150 master-0 kubenswrapper[31559]: I0216 02:40:07.807118 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vc447"
Feb 16 02:40:07.814212 master-0 kubenswrapper[31559]: I0216 02:40:07.814170 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8hdc\" (UniqueName: \"kubernetes.io/projected/3376cdb9-0b42-43bc-a145-81508f342ccd-kube-api-access-m8hdc\") pod \"nova-cell0-e957-account-create-update-7245k\" (UID: \"3376cdb9-0b42-43bc-a145-81508f342ccd\") " pod="openstack/nova-cell0-e957-account-create-update-7245k"
Feb 16 02:40:07.869842 master-0 kubenswrapper[31559]: I0216 02:40:07.869794 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e957-account-create-update-7245k"
Feb 16 02:40:07.900991 master-0 kubenswrapper[31559]: I0216 02:40:07.900818 31559 scope.go:117] "RemoveContainer" containerID="b85a5b72da96bec9978439d8cea19f4daef5f5443be540ddd413f27358a35045"
Feb 16 02:40:07.902924 master-0 kubenswrapper[31559]: I0216 02:40:07.902158 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6448424-28c1-42d1-9f7f-67db21f0e53c-operator-scripts\") pod \"nova-cell1-245b-account-create-update-rgllf\" (UID: \"c6448424-28c1-42d1-9f7f-67db21f0e53c\") " pod="openstack/nova-cell1-245b-account-create-update-rgllf"
Feb 16 02:40:07.904785 master-0 kubenswrapper[31559]: I0216 02:40:07.904555 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcndj\" (UniqueName: \"kubernetes.io/projected/c6448424-28c1-42d1-9f7f-67db21f0e53c-kube-api-access-zcndj\") pod \"nova-cell1-245b-account-create-update-rgllf\" (UID: \"c6448424-28c1-42d1-9f7f-67db21f0e53c\") " pod="openstack/nova-cell1-245b-account-create-update-rgllf"
Feb 16 02:40:07.954518 master-0 kubenswrapper[31559]: I0216 02:40:07.954448 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-86f55b5cf6-zxgrr"]
Feb 16 02:40:07.980126 master-0 kubenswrapper[31559]: I0216 02:40:07.979015 31559 scope.go:117] "RemoveContainer" containerID="6d97272db2893c7c52f68fe3a07382e729c317a53b5116fe9d87e3237529020f"
Feb 16 02:40:08.004005 master-0 kubenswrapper[31559]: I0216 02:40:07.999968 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-86f55b5cf6-zxgrr"]
Feb 16 02:40:08.010237 master-0 kubenswrapper[31559]: I0216 02:40:08.009616 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcndj\" (UniqueName: \"kubernetes.io/projected/c6448424-28c1-42d1-9f7f-67db21f0e53c-kube-api-access-zcndj\") pod \"nova-cell1-245b-account-create-update-rgllf\" (UID: \"c6448424-28c1-42d1-9f7f-67db21f0e53c\") " pod="openstack/nova-cell1-245b-account-create-update-rgllf"
Feb 16 02:40:08.010237 master-0 kubenswrapper[31559]: I0216 02:40:08.009729 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6448424-28c1-42d1-9f7f-67db21f0e53c-operator-scripts\") pod \"nova-cell1-245b-account-create-update-rgllf\" (UID: \"c6448424-28c1-42d1-9f7f-67db21f0e53c\") " pod="openstack/nova-cell1-245b-account-create-update-rgllf"
Feb 16 02:40:08.016109 master-0 kubenswrapper[31559]: I0216 02:40:08.011356 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6448424-28c1-42d1-9f7f-67db21f0e53c-operator-scripts\") pod \"nova-cell1-245b-account-create-update-rgllf\" (UID: \"c6448424-28c1-42d1-9f7f-67db21f0e53c\") " pod="openstack/nova-cell1-245b-account-create-update-rgllf"
Feb 16 02:40:08.056106 master-0 kubenswrapper[31559]: I0216 02:40:08.050636 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcndj\" (UniqueName: \"kubernetes.io/projected/c6448424-28c1-42d1-9f7f-67db21f0e53c-kube-api-access-zcndj\") pod \"nova-cell1-245b-account-create-update-rgllf\" (UID: \"c6448424-28c1-42d1-9f7f-67db21f0e53c\") " pod="openstack/nova-cell1-245b-account-create-update-rgllf"
Feb 16 02:40:08.114989 master-0 kubenswrapper[31559]: I0216 02:40:08.114902 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-prwlf"]
Feb 16 02:40:08.179718 master-0 kubenswrapper[31559]: I0216 02:40:08.179657 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-245b-account-create-update-rgllf"
Feb 16 02:40:08.302995 master-0 kubenswrapper[31559]: I0216 02:40:08.302914 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xv575"]
Feb 16 02:40:08.467016 master-0 kubenswrapper[31559]: I0216 02:40:08.466913 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dbe0-account-create-update-jbp2r"]
Feb 16 02:40:08.489584 master-0 kubenswrapper[31559]: I0216 02:40:08.486224 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vc447"]
Feb 16 02:40:08.686462 master-0 kubenswrapper[31559]: I0216 02:40:08.682073 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e957-account-create-update-7245k"]
Feb 16 02:40:09.470352 master-0 kubenswrapper[31559]: I0216 02:40:09.470100 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5687464c96-4rx8g"
Feb 16 02:40:09.941980 master-0 kubenswrapper[31559]: I0216 02:40:09.941450 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbb13644-ecbf-43f1-9203-c714b5485f17" path="/var/lib/kubelet/pods/bbb13644-ecbf-43f1-9203-c714b5485f17/volumes"
Feb 16 02:40:12.750318 master-0 kubenswrapper[31559]: I0216 02:40:12.750256 31559 generic.go:334] "Generic (PLEG): container finished" podID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerID="96d7e74c9317ad75c2ba8a334fd5a7f3aaa196471bc017f4e4846ca45d038338" exitCode=137
Feb 16 02:40:12.750318 master-0 kubenswrapper[31559]: I0216 02:40:12.750311 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"14c0d02d-65db-41be-81a2-f4a1c5996cb8","Type":"ContainerDied","Data":"96d7e74c9317ad75c2ba8a334fd5a7f3aaa196471bc017f4e4846ca45d038338"}
Feb 16 02:40:13.007539 master-0 kubenswrapper[31559]: I0216 02:40:13.006291 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:13.024608 master-0 kubenswrapper[31559]: I0216 02:40:13.024234 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86b5fc89c6-rhb4k"
Feb 16 02:40:13.574029 master-0 kubenswrapper[31559]: W0216 02:40:13.573961 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68b63ce3_aaa3_4eeb_9264_48d821ee7f81.slice/crio-5fbb07ee8d64f3828feb0785871cd57467314c116c504792e3ff88dcf857e574 WatchSource:0}: Error finding container 5fbb07ee8d64f3828feb0785871cd57467314c116c504792e3ff88dcf857e574: Status 404 returned error can't find the container with id 5fbb07ee8d64f3828feb0785871cd57467314c116c504792e3ff88dcf857e574
Feb 16 02:40:13.576986 master-0 kubenswrapper[31559]: W0216 02:40:13.576928 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85150fe7_7e88_4ebb_a2c7_643274767b45.slice/crio-08d8fde1b124abe574038b9e2a9f24f5f32b15ebb705c877aba570953a689723 WatchSource:0}: Error finding container 08d8fde1b124abe574038b9e2a9f24f5f32b15ebb705c877aba570953a689723: Status 404 returned error can't find the container with id 08d8fde1b124abe574038b9e2a9f24f5f32b15ebb705c877aba570953a689723
Feb 16 02:40:13.580251 master-0 kubenswrapper[31559]: W0216 02:40:13.580201 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded677e03_0917_4744_93f0_d7e64470c27d.slice/crio-5383b4526c74e856ad1a34231a1141dec8bd2dae8809ccb250461331620cfda8 WatchSource:0}: Error finding container 5383b4526c74e856ad1a34231a1141dec8bd2dae8809ccb250461331620cfda8: Status 404 returned error can't find the container with id 5383b4526c74e856ad1a34231a1141dec8bd2dae8809ccb250461331620cfda8
Feb 16 02:40:13.660313 master-0 kubenswrapper[31559]: I0216 02:40:13.660219 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-dde57-api-0" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-api" probeResult="failure" output="Get \"http://10.128.0.227:8776/healthcheck\": dial tcp 10.128.0.227:8776: connect: connection refused"
Feb 16 02:40:13.772975 master-0 kubenswrapper[31559]: I0216 02:40:13.772908 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dbe0-account-create-update-jbp2r" event={"ID":"ed677e03-0917-4744-93f0-d7e64470c27d","Type":"ContainerStarted","Data":"5383b4526c74e856ad1a34231a1141dec8bd2dae8809ccb250461331620cfda8"}
Feb 16 02:40:13.774835 master-0 kubenswrapper[31559]: I0216 02:40:13.774787 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xv575" event={"ID":"85150fe7-7e88-4ebb-a2c7-643274767b45","Type":"ContainerStarted","Data":"08d8fde1b124abe574038b9e2a9f24f5f32b15ebb705c877aba570953a689723"}
Feb 16 02:40:13.778323 master-0 kubenswrapper[31559]: I0216 02:40:13.778184 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vc447" event={"ID":"68b63ce3-aaa3-4eeb-9264-48d821ee7f81","Type":"ContainerStarted","Data":"5fbb07ee8d64f3828feb0785871cd57467314c116c504792e3ff88dcf857e574"}
Feb 16 02:40:14.406504 master-0 kubenswrapper[31559]: W0216 02:40:14.406448 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323b3672_0931_4d00_9d68_6d4eae9a4cec.slice/crio-7437ed7a5c813d98a7b421cda589959dc38657de8068e6a87fb4cb74948700f8 WatchSource:0}: Error finding container 7437ed7a5c813d98a7b421cda589959dc38657de8068e6a87fb4cb74948700f8: Status 404 returned error can't find the container with id 7437ed7a5c813d98a7b421cda589959dc38657de8068e6a87fb4cb74948700f8
Feb 16 02:40:14.689354 master-0 kubenswrapper[31559]: I0216 02:40:14.688581 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64cc79985-qdkm5"
Feb 16 02:40:14.822185 master-0 kubenswrapper[31559]: I0216 02:40:14.822141 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e957-account-create-update-7245k" event={"ID":"3376cdb9-0b42-43bc-a145-81508f342ccd","Type":"ContainerStarted","Data":"10e6db89b7b66e8ee2b11ca79726596a6f4e9a9f367ddcd3ce1166b3b5eaf439"}
Feb 16 02:40:14.827772 master-0 kubenswrapper[31559]: I0216 02:40:14.827169 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-prwlf" event={"ID":"323b3672-0931-4d00-9d68-6d4eae9a4cec","Type":"ContainerStarted","Data":"7437ed7a5c813d98a7b421cda589959dc38657de8068e6a87fb4cb74948700f8"}
Feb 16 02:40:14.831751 master-0 kubenswrapper[31559]: I0216 02:40:14.831529 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5687464c96-4rx8g"]
Feb 16 02:40:14.831941 master-0 kubenswrapper[31559]: I0216 02:40:14.831875 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5687464c96-4rx8g" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" containerName="neutron-api" containerID="cri-o://f45d4d96aae3bcd16a415272cdfd08b0b444279c01764ca6c58606b1e088e4ee" gracePeriod=30
Feb 16 02:40:14.833310 master-0 kubenswrapper[31559]: I0216 02:40:14.832085 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5687464c96-4rx8g" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" containerName="neutron-httpd" containerID="cri-o://742ab028b0f99a9d936cbb8dd6ec80a01d1400a786fdd89945263ec04b24b11f" gracePeriod=30
Feb 16 02:40:14.943558 master-0 kubenswrapper[31559]: I0216 02:40:14.943422 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:15.088162 master-0 kubenswrapper[31559]: I0216 02:40:15.087637 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-245b-account-create-update-rgllf"]
Feb 16 02:40:15.144341 master-0 kubenswrapper[31559]: I0216 02:40:15.144298 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqkmm\" (UniqueName: \"kubernetes.io/projected/14c0d02d-65db-41be-81a2-f4a1c5996cb8-kube-api-access-xqkmm\") pod \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") "
Feb 16 02:40:15.144962 master-0 kubenswrapper[31559]: I0216 02:40:15.144945 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14c0d02d-65db-41be-81a2-f4a1c5996cb8-etc-machine-id\") pod \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") "
Feb 16 02:40:15.145139 master-0 kubenswrapper[31559]: I0216 02:40:15.145126 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-scripts\") pod \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") "
Feb 16 02:40:15.145214 master-0 kubenswrapper[31559]: I0216 02:40:15.145202 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-combined-ca-bundle\") pod \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") "
Feb 16 02:40:15.145313 master-0 kubenswrapper[31559]: I0216 02:40:15.145301 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data-custom\") pod \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") "
Feb 16 02:40:15.145385 master-0 kubenswrapper[31559]: I0216 02:40:15.145374 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14c0d02d-65db-41be-81a2-f4a1c5996cb8-logs\") pod \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") "
Feb 16 02:40:15.145561 master-0 kubenswrapper[31559]: I0216 02:40:15.145547 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data\") pod \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\" (UID: \"14c0d02d-65db-41be-81a2-f4a1c5996cb8\") "
Feb 16 02:40:15.154550 master-0 kubenswrapper[31559]: I0216 02:40:15.153578 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c0d02d-65db-41be-81a2-f4a1c5996cb8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "14c0d02d-65db-41be-81a2-f4a1c5996cb8" (UID: "14c0d02d-65db-41be-81a2-f4a1c5996cb8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 02:40:15.154550 master-0 kubenswrapper[31559]: I0216 02:40:15.153729 31559 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14c0d02d-65db-41be-81a2-f4a1c5996cb8-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:15.156048 master-0 kubenswrapper[31559]: I0216 02:40:15.156007 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c0d02d-65db-41be-81a2-f4a1c5996cb8-logs" (OuterVolumeSpecName: "logs") pod "14c0d02d-65db-41be-81a2-f4a1c5996cb8" (UID: "14c0d02d-65db-41be-81a2-f4a1c5996cb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:40:15.171856 master-0 kubenswrapper[31559]: I0216 02:40:15.171779 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "14c0d02d-65db-41be-81a2-f4a1c5996cb8" (UID: "14c0d02d-65db-41be-81a2-f4a1c5996cb8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:15.176908 master-0 kubenswrapper[31559]: I0216 02:40:15.176299 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c0d02d-65db-41be-81a2-f4a1c5996cb8-kube-api-access-xqkmm" (OuterVolumeSpecName: "kube-api-access-xqkmm") pod "14c0d02d-65db-41be-81a2-f4a1c5996cb8" (UID: "14c0d02d-65db-41be-81a2-f4a1c5996cb8"). InnerVolumeSpecName "kube-api-access-xqkmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:40:15.178898 master-0 kubenswrapper[31559]: I0216 02:40:15.178402 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-scripts" (OuterVolumeSpecName: "scripts") pod "14c0d02d-65db-41be-81a2-f4a1c5996cb8" (UID: "14c0d02d-65db-41be-81a2-f4a1c5996cb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:15.258911 master-0 kubenswrapper[31559]: I0216 02:40:15.257917 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqkmm\" (UniqueName: \"kubernetes.io/projected/14c0d02d-65db-41be-81a2-f4a1c5996cb8-kube-api-access-xqkmm\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:15.258911 master-0 kubenswrapper[31559]: I0216 02:40:15.257974 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:15.258911 master-0 kubenswrapper[31559]: I0216 02:40:15.257988 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14c0d02d-65db-41be-81a2-f4a1c5996cb8-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:15.258911 master-0 kubenswrapper[31559]: I0216 02:40:15.258001 31559 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:15.464454 master-0 kubenswrapper[31559]: I0216 02:40:15.460989 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14c0d02d-65db-41be-81a2-f4a1c5996cb8" (UID: "14c0d02d-65db-41be-81a2-f4a1c5996cb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:15.464454 master-0 kubenswrapper[31559]: I0216 02:40:15.463510 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:15.492019 master-0 kubenswrapper[31559]: I0216 02:40:15.491361 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data" (OuterVolumeSpecName: "config-data") pod "14c0d02d-65db-41be-81a2-f4a1c5996cb8" (UID: "14c0d02d-65db-41be-81a2-f4a1c5996cb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:15.567457 master-0 kubenswrapper[31559]: I0216 02:40:15.565244 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c0d02d-65db-41be-81a2-f4a1c5996cb8-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:15.853909 master-0 kubenswrapper[31559]: I0216 02:40:15.853269 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"14c0d02d-65db-41be-81a2-f4a1c5996cb8","Type":"ContainerDied","Data":"7d7927c478070f383bbaf3f5f23c09747147f5c0dfd0e760aba7d461f2df2efc"}
Feb 16 02:40:15.853909 master-0 kubenswrapper[31559]: I0216 02:40:15.853290 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:15.853909 master-0 kubenswrapper[31559]: I0216 02:40:15.853356 31559 scope.go:117] "RemoveContainer" containerID="96d7e74c9317ad75c2ba8a334fd5a7f3aaa196471bc017f4e4846ca45d038338"
Feb 16 02:40:15.856666 master-0 kubenswrapper[31559]: I0216 02:40:15.856198 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-2p5mt" event={"ID":"e68f2b97-da69-4ad8-aa22-20cbf6fcb819","Type":"ContainerStarted","Data":"e6bbea076546c7400fdafe4f2000c907151e73ce02439ce04f17a22b27b86a09"}
Feb 16 02:40:15.858942 master-0 kubenswrapper[31559]: I0216 02:40:15.858641 31559 generic.go:334] "Generic (PLEG): container finished" podID="c6448424-28c1-42d1-9f7f-67db21f0e53c" containerID="01c06861ebd5625318f57097560883b98994f0ad57474de583b2a4b05182eaa0" exitCode=0
Feb 16 02:40:15.858942 master-0 kubenswrapper[31559]: I0216 02:40:15.858700 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-245b-account-create-update-rgllf" event={"ID":"c6448424-28c1-42d1-9f7f-67db21f0e53c","Type":"ContainerDied","Data":"01c06861ebd5625318f57097560883b98994f0ad57474de583b2a4b05182eaa0"}
Feb 16 02:40:15.858942 master-0 kubenswrapper[31559]: I0216 02:40:15.858723 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-245b-account-create-update-rgllf" event={"ID":"c6448424-28c1-42d1-9f7f-67db21f0e53c","Type":"ContainerStarted","Data":"b9d57ce5454eda9d9d301ba676a1fb525b22f3b4e61138c857bcdbad21fd4ea7"}
Feb 16 02:40:15.867262 master-0 kubenswrapper[31559]: I0216 02:40:15.867201 31559 generic.go:334] "Generic (PLEG): container finished" podID="cd9993cd-827d-4976-ac75-954bb9ace111" containerID="742ab028b0f99a9d936cbb8dd6ec80a01d1400a786fdd89945263ec04b24b11f" exitCode=0
Feb 16 02:40:15.867393 master-0 kubenswrapper[31559]: I0216 02:40:15.867289 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687464c96-4rx8g" event={"ID":"cd9993cd-827d-4976-ac75-954bb9ace111","Type":"ContainerDied","Data":"742ab028b0f99a9d936cbb8dd6ec80a01d1400a786fdd89945263ec04b24b11f"}
Feb 16 02:40:15.871502 master-0 kubenswrapper[31559]: I0216 02:40:15.871366 31559 generic.go:334] "Generic (PLEG): container finished" podID="323b3672-0931-4d00-9d68-6d4eae9a4cec" containerID="43488a4f351287cd2ba5e19c90c76153ba4dbd58ef00adb6ad5a8fa8a6f47ed6" exitCode=0
Feb 16 02:40:15.871767 master-0 kubenswrapper[31559]: I0216 02:40:15.871552 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-prwlf" event={"ID":"323b3672-0931-4d00-9d68-6d4eae9a4cec","Type":"ContainerDied","Data":"43488a4f351287cd2ba5e19c90c76153ba4dbd58ef00adb6ad5a8fa8a6f47ed6"}
Feb 16 02:40:15.879879 master-0 kubenswrapper[31559]: I0216 02:40:15.877304 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"96e7ee31-a94f-4d6d-a1c7-abb314357ff7","Type":"ContainerStarted","Data":"306f6597687016867af034c7e78c0411bd70981dc1552847512264a1f34ea3fb"}
Feb 16 02:40:15.884268 master-0 kubenswrapper[31559]: I0216 02:40:15.884216 31559 generic.go:334] "Generic (PLEG): container finished" podID="85150fe7-7e88-4ebb-a2c7-643274767b45" containerID="52e32c9653749fa46ebe4502645f4369efd319735791134ff4820bf98d64d539" exitCode=0
Feb 16 02:40:15.884481 master-0 kubenswrapper[31559]: I0216 02:40:15.884313 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xv575" event={"ID":"85150fe7-7e88-4ebb-a2c7-643274767b45","Type":"ContainerDied","Data":"52e32c9653749fa46ebe4502645f4369efd319735791134ff4820bf98d64d539"}
Feb 16 02:40:15.886367 master-0 kubenswrapper[31559]: I0216 02:40:15.886331 31559 generic.go:334] "Generic (PLEG): container finished" podID="68b63ce3-aaa3-4eeb-9264-48d821ee7f81" containerID="ae642a2108709fc64fc5fdad7b85e1857a36ee9843f78e9d3a995993871301a1" exitCode=0
Feb 16 02:40:15.886462 master-0 kubenswrapper[31559]: I0216 02:40:15.886371 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vc447" event={"ID":"68b63ce3-aaa3-4eeb-9264-48d821ee7f81","Type":"ContainerDied","Data":"ae642a2108709fc64fc5fdad7b85e1857a36ee9843f78e9d3a995993871301a1"}
Feb 16 02:40:15.888735 master-0 kubenswrapper[31559]: I0216 02:40:15.888691 31559 generic.go:334] "Generic (PLEG): container finished" podID="3376cdb9-0b42-43bc-a145-81508f342ccd" containerID="b6771a6f34446d7d5b6513f70ccf41ba6fb0e6b7561268b116cb2f6ad74d23b6" exitCode=0
Feb 16 02:40:15.888820 master-0 kubenswrapper[31559]: I0216 02:40:15.888750 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e957-account-create-update-7245k" event={"ID":"3376cdb9-0b42-43bc-a145-81508f342ccd","Type":"ContainerDied","Data":"b6771a6f34446d7d5b6513f70ccf41ba6fb0e6b7561268b116cb2f6ad74d23b6"}
Feb 16 02:40:15.889279 master-0 kubenswrapper[31559]: I0216 02:40:15.889210 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-2p5mt" podStartSLOduration=2.493861168 podStartE2EDuration="9.889196155s" podCreationTimestamp="2026-02-16 02:40:06 +0000 UTC" firstStartedPulling="2026-02-16 02:40:07.302319571 +0000 UTC m=+1059.646925576" lastFinishedPulling="2026-02-16 02:40:14.697654548 +0000 UTC m=+1067.042260563" observedRunningTime="2026-02-16 02:40:15.873294142 +0000 UTC m=+1068.217900157" watchObservedRunningTime="2026-02-16 02:40:15.889196155 +0000 UTC m=+1068.233802170"
Feb 16 02:40:15.894358 master-0 kubenswrapper[31559]: I0216 02:40:15.894297 31559 generic.go:334] "Generic (PLEG): container finished" podID="ed677e03-0917-4744-93f0-d7e64470c27d" containerID="89f83216dcf29606697d9b433300397e8357f8d45c7bbc3534ff236ae0e18cdc" exitCode=0
Feb 16 02:40:15.894358 master-0 kubenswrapper[31559]: I0216 02:40:15.894357 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dbe0-account-create-update-jbp2r" event={"ID":"ed677e03-0917-4744-93f0-d7e64470c27d","Type":"ContainerDied","Data":"89f83216dcf29606697d9b433300397e8357f8d45c7bbc3534ff236ae0e18cdc"}
Feb 16 02:40:15.994077 master-0 kubenswrapper[31559]: I0216 02:40:15.994047 31559 scope.go:117] "RemoveContainer" containerID="c64b31dbb9d4a8d89fd1ab605afd41af761009d696a4e083a85dd253a2f9b898"
Feb 16 02:40:16.055990 master-0 kubenswrapper[31559]: I0216 02:40:16.055918 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.809836853 podStartE2EDuration="17.055901341s" podCreationTimestamp="2026-02-16 02:39:59 +0000 UTC" firstStartedPulling="2026-02-16 02:40:00.333355943 +0000 UTC m=+1052.677961958" lastFinishedPulling="2026-02-16 02:40:14.579420431 +0000 UTC m=+1066.924026446" observedRunningTime="2026-02-16 02:40:16.028328202 +0000 UTC m=+1068.372934217" watchObservedRunningTime="2026-02-16 02:40:16.055901341 +0000 UTC m=+1068.400507356"
Feb 16 02:40:16.112471 master-0 kubenswrapper[31559]: I0216 02:40:16.110226 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-api-0"]
Feb 16 02:40:16.128537 master-0 kubenswrapper[31559]: I0216 02:40:16.128482 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dde57-api-0"]
Feb 16 02:40:16.141201 master-0 kubenswrapper[31559]: I0216 02:40:16.141128 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dde57-api-0"]
Feb 16 02:40:16.141863 master-0 kubenswrapper[31559]: E0216 02:40:16.141840 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-dde57-api-log"
Feb 16 02:40:16.141946 master-0 kubenswrapper[31559]: I0216 02:40:16.141877 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-dde57-api-log"
Feb 16 02:40:16.142013 master-0 kubenswrapper[31559]: E0216 02:40:16.141978 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-api"
Feb 16 02:40:16.142013 master-0 kubenswrapper[31559]: I0216 02:40:16.141987 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-api"
Feb 16 02:40:16.142343 master-0 kubenswrapper[31559]: I0216 02:40:16.142324 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-api"
Feb 16 02:40:16.142510 master-0 kubenswrapper[31559]: I0216 02:40:16.142457 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" containerName="cinder-dde57-api-log"
Feb 16 02:40:16.145270 master-0 kubenswrapper[31559]: I0216 02:40:16.145240 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.146820 master-0 kubenswrapper[31559]: I0216 02:40:16.146780 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-dde57-api-config-data"
Feb 16 02:40:16.148343 master-0 kubenswrapper[31559]: I0216 02:40:16.148308 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 16 02:40:16.148553 master-0 kubenswrapper[31559]: I0216 02:40:16.148533 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 16 02:40:16.160408 master-0 kubenswrapper[31559]: I0216 02:40:16.160334 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-api-0"]
Feb 16 02:40:16.186642 master-0 kubenswrapper[31559]: I0216 02:40:16.186571 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-combined-ca-bundle\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.186841 master-0 kubenswrapper[31559]: I0216 02:40:16.186661 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h2g5\" (UniqueName: \"kubernetes.io/projected/f3c55abd-a513-492c-a28d-57b494edf9f9-kube-api-access-4h2g5\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.186841 master-0 kubenswrapper[31559]: I0216 02:40:16.186751 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-public-tls-certs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.186841 master-0 kubenswrapper[31559]: I0216 02:40:16.186826 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-config-data-custom\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.187047 master-0 kubenswrapper[31559]: I0216 02:40:16.186851 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-scripts\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.187047 master-0 kubenswrapper[31559]: I0216 02:40:16.186885 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3c55abd-a513-492c-a28d-57b494edf9f9-logs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.187047 master-0 kubenswrapper[31559]: I0216 02:40:16.186910 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-config-data\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.187047 master-0 kubenswrapper[31559]: I0216 02:40:16.186940 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-internal-tls-certs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.187047 master-0 kubenswrapper[31559]: I0216 02:40:16.186966 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3c55abd-a513-492c-a28d-57b494edf9f9-etc-machine-id\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.288917 master-0 kubenswrapper[31559]: I0216 02:40:16.288645 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-public-tls-certs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.288917 master-0 kubenswrapper[31559]: I0216 02:40:16.288917 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-config-data-custom\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.289189 master-0 kubenswrapper[31559]: I0216 02:40:16.288946 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-scripts\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.289189 master-0 kubenswrapper[31559]: I0216 02:40:16.288984 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3c55abd-a513-492c-a28d-57b494edf9f9-logs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.289189 master-0 kubenswrapper[31559]: I0216 02:40:16.289087 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-config-data\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.289189 master-0 kubenswrapper[31559]: I0216 02:40:16.289124 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-internal-tls-certs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.289189 master-0 kubenswrapper[31559]: I0216 02:40:16.289147 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3c55abd-a513-492c-a28d-57b494edf9f9-etc-machine-id\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0"
Feb 16 02:40:16.289189 master-0
kubenswrapper[31559]: I0216 02:40:16.289169 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-combined-ca-bundle\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.289549 master-0 kubenswrapper[31559]: I0216 02:40:16.289204 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h2g5\" (UniqueName: \"kubernetes.io/projected/f3c55abd-a513-492c-a28d-57b494edf9f9-kube-api-access-4h2g5\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.289549 master-0 kubenswrapper[31559]: I0216 02:40:16.289545 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3c55abd-a513-492c-a28d-57b494edf9f9-etc-machine-id\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.293650 master-0 kubenswrapper[31559]: I0216 02:40:16.293322 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-internal-tls-certs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.293650 master-0 kubenswrapper[31559]: I0216 02:40:16.293495 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-combined-ca-bundle\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.294728 master-0 kubenswrapper[31559]: I0216 02:40:16.294657 31559 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-config-data-custom\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.295413 master-0 kubenswrapper[31559]: I0216 02:40:16.295353 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3c55abd-a513-492c-a28d-57b494edf9f9-logs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.299033 master-0 kubenswrapper[31559]: I0216 02:40:16.298985 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-scripts\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.299546 master-0 kubenswrapper[31559]: I0216 02:40:16.299512 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-public-tls-certs\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.299629 master-0 kubenswrapper[31559]: I0216 02:40:16.299609 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3c55abd-a513-492c-a28d-57b494edf9f9-config-data\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.317367 master-0 kubenswrapper[31559]: I0216 02:40:16.317303 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h2g5\" (UniqueName: 
\"kubernetes.io/projected/f3c55abd-a513-492c-a28d-57b494edf9f9-kube-api-access-4h2g5\") pod \"cinder-dde57-api-0\" (UID: \"f3c55abd-a513-492c-a28d-57b494edf9f9\") " pod="openstack/cinder-dde57-api-0" Feb 16 02:40:16.490555 master-0 kubenswrapper[31559]: I0216 02:40:16.490505 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dde57-api-0" Feb 16 02:40:17.019961 master-0 kubenswrapper[31559]: I0216 02:40:17.019756 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dde57-api-0"] Feb 16 02:40:17.021954 master-0 kubenswrapper[31559]: W0216 02:40:17.021611 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3c55abd_a513_492c_a28d_57b494edf9f9.slice/crio-05a8c66b492cffd860d09ff155679380e4d86d057a444ffcb0f8021c4dfcbfcf WatchSource:0}: Error finding container 05a8c66b492cffd860d09ff155679380e4d86d057a444ffcb0f8021c4dfcbfcf: Status 404 returned error can't find the container with id 05a8c66b492cffd860d09ff155679380e4d86d057a444ffcb0f8021c4dfcbfcf Feb 16 02:40:17.513426 master-0 kubenswrapper[31559]: I0216 02:40:17.511753 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:17.652471 master-0 kubenswrapper[31559]: I0216 02:40:17.652348 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed677e03-0917-4744-93f0-d7e64470c27d-operator-scripts\") pod \"ed677e03-0917-4744-93f0-d7e64470c27d\" (UID: \"ed677e03-0917-4744-93f0-d7e64470c27d\") " Feb 16 02:40:17.652667 master-0 kubenswrapper[31559]: I0216 02:40:17.652517 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9ztl\" (UniqueName: \"kubernetes.io/projected/ed677e03-0917-4744-93f0-d7e64470c27d-kube-api-access-f9ztl\") pod \"ed677e03-0917-4744-93f0-d7e64470c27d\" (UID: \"ed677e03-0917-4744-93f0-d7e64470c27d\") " Feb 16 02:40:17.653872 master-0 kubenswrapper[31559]: I0216 02:40:17.653827 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed677e03-0917-4744-93f0-d7e64470c27d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed677e03-0917-4744-93f0-d7e64470c27d" (UID: "ed677e03-0917-4744-93f0-d7e64470c27d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:40:17.660620 master-0 kubenswrapper[31559]: I0216 02:40:17.660572 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed677e03-0917-4744-93f0-d7e64470c27d-kube-api-access-f9ztl" (OuterVolumeSpecName: "kube-api-access-f9ztl") pod "ed677e03-0917-4744-93f0-d7e64470c27d" (UID: "ed677e03-0917-4744-93f0-d7e64470c27d"). InnerVolumeSpecName "kube-api-access-f9ztl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:17.772016 master-0 kubenswrapper[31559]: I0216 02:40:17.771961 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9ztl\" (UniqueName: \"kubernetes.io/projected/ed677e03-0917-4744-93f0-d7e64470c27d-kube-api-access-f9ztl\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:17.772016 master-0 kubenswrapper[31559]: I0216 02:40:17.772011 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed677e03-0917-4744-93f0-d7e64470c27d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:17.890611 master-0 kubenswrapper[31559]: I0216 02:40:17.890564 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e957-account-create-update-7245k" Feb 16 02:40:17.895565 master-0 kubenswrapper[31559]: I0216 02:40:17.895505 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-245b-account-create-update-rgllf" Feb 16 02:40:17.903941 master-0 kubenswrapper[31559]: I0216 02:40:17.903864 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vc447" Feb 16 02:40:17.937925 master-0 kubenswrapper[31559]: I0216 02:40:17.937881 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:17.952224 master-0 kubenswrapper[31559]: I0216 02:40:17.952168 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dbe0-account-create-update-jbp2r" Feb 16 02:40:17.957842 master-0 kubenswrapper[31559]: I0216 02:40:17.957679 31559 generic.go:334] "Generic (PLEG): container finished" podID="e68f2b97-da69-4ad8-aa22-20cbf6fcb819" containerID="e6bbea076546c7400fdafe4f2000c907151e73ce02439ce04f17a22b27b86a09" exitCode=0 Feb 16 02:40:17.966232 master-0 kubenswrapper[31559]: I0216 02:40:17.966171 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-prwlf" Feb 16 02:40:17.966320 master-0 kubenswrapper[31559]: I0216 02:40:17.966250 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:17.966373 master-0 kubenswrapper[31559]: I0216 02:40:17.966320 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-245b-account-create-update-rgllf" Feb 16 02:40:17.968905 master-0 kubenswrapper[31559]: I0216 02:40:17.968865 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c0d02d-65db-41be-81a2-f4a1c5996cb8" path="/var/lib/kubelet/pods/14c0d02d-65db-41be-81a2-f4a1c5996cb8/volumes" Feb 16 02:40:17.970538 master-0 kubenswrapper[31559]: I0216 02:40:17.970513 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vc447" Feb 16 02:40:17.979716 master-0 kubenswrapper[31559]: I0216 02:40:17.979692 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e957-account-create-update-7245k" Feb 16 02:40:17.980274 master-0 kubenswrapper[31559]: I0216 02:40:17.980228 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3376cdb9-0b42-43bc-a145-81508f342ccd-operator-scripts\") pod \"3376cdb9-0b42-43bc-a145-81508f342ccd\" (UID: \"3376cdb9-0b42-43bc-a145-81508f342ccd\") " Feb 16 02:40:17.980341 master-0 kubenswrapper[31559]: I0216 02:40:17.980329 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-operator-scripts\") pod \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\" (UID: \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\") " Feb 16 02:40:17.980384 master-0 kubenswrapper[31559]: I0216 02:40:17.980354 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6448424-28c1-42d1-9f7f-67db21f0e53c-operator-scripts\") pod \"c6448424-28c1-42d1-9f7f-67db21f0e53c\" (UID: \"c6448424-28c1-42d1-9f7f-67db21f0e53c\") " Feb 16 02:40:17.980421 master-0 kubenswrapper[31559]: I0216 02:40:17.980386 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr5zn\" (UniqueName: \"kubernetes.io/projected/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-kube-api-access-rr5zn\") pod \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\" (UID: \"68b63ce3-aaa3-4eeb-9264-48d821ee7f81\") " Feb 16 02:40:17.980477 master-0 kubenswrapper[31559]: I0216 02:40:17.980457 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323b3672-0931-4d00-9d68-6d4eae9a4cec-operator-scripts\") pod \"323b3672-0931-4d00-9d68-6d4eae9a4cec\" (UID: \"323b3672-0931-4d00-9d68-6d4eae9a4cec\") " Feb 16 02:40:17.980596 master-0 
kubenswrapper[31559]: I0216 02:40:17.980577 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8hdc\" (UniqueName: \"kubernetes.io/projected/3376cdb9-0b42-43bc-a145-81508f342ccd-kube-api-access-m8hdc\") pod \"3376cdb9-0b42-43bc-a145-81508f342ccd\" (UID: \"3376cdb9-0b42-43bc-a145-81508f342ccd\") " Feb 16 02:40:17.980644 master-0 kubenswrapper[31559]: I0216 02:40:17.980619 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4mm5\" (UniqueName: \"kubernetes.io/projected/323b3672-0931-4d00-9d68-6d4eae9a4cec-kube-api-access-p4mm5\") pod \"323b3672-0931-4d00-9d68-6d4eae9a4cec\" (UID: \"323b3672-0931-4d00-9d68-6d4eae9a4cec\") " Feb 16 02:40:17.980675 master-0 kubenswrapper[31559]: I0216 02:40:17.980650 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcndj\" (UniqueName: \"kubernetes.io/projected/c6448424-28c1-42d1-9f7f-67db21f0e53c-kube-api-access-zcndj\") pod \"c6448424-28c1-42d1-9f7f-67db21f0e53c\" (UID: \"c6448424-28c1-42d1-9f7f-67db21f0e53c\") " Feb 16 02:40:17.981037 master-0 kubenswrapper[31559]: I0216 02:40:17.980992 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3376cdb9-0b42-43bc-a145-81508f342ccd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3376cdb9-0b42-43bc-a145-81508f342ccd" (UID: "3376cdb9-0b42-43bc-a145-81508f342ccd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:40:17.981160 master-0 kubenswrapper[31559]: I0216 02:40:17.981100 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68b63ce3-aaa3-4eeb-9264-48d821ee7f81" (UID: "68b63ce3-aaa3-4eeb-9264-48d821ee7f81"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:40:17.981160 master-0 kubenswrapper[31559]: I0216 02:40:17.981112 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6448424-28c1-42d1-9f7f-67db21f0e53c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6448424-28c1-42d1-9f7f-67db21f0e53c" (UID: "c6448424-28c1-42d1-9f7f-67db21f0e53c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:40:17.981688 master-0 kubenswrapper[31559]: I0216 02:40:17.981655 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3376cdb9-0b42-43bc-a145-81508f342ccd-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:17.981688 master-0 kubenswrapper[31559]: I0216 02:40:17.981681 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:17.981688 master-0 kubenswrapper[31559]: I0216 02:40:17.981691 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6448424-28c1-42d1-9f7f-67db21f0e53c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:17.982310 master-0 kubenswrapper[31559]: I0216 02:40:17.982268 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dbe0-account-create-update-jbp2r" event={"ID":"ed677e03-0917-4744-93f0-d7e64470c27d","Type":"ContainerDied","Data":"5383b4526c74e856ad1a34231a1141dec8bd2dae8809ccb250461331620cfda8"} Feb 16 02:40:17.982381 master-0 kubenswrapper[31559]: I0216 02:40:17.982323 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5383b4526c74e856ad1a34231a1141dec8bd2dae8809ccb250461331620cfda8" Feb 16 02:40:17.982381 master-0 
kubenswrapper[31559]: I0216 02:40:17.982336 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-prwlf" event={"ID":"323b3672-0931-4d00-9d68-6d4eae9a4cec","Type":"ContainerDied","Data":"7437ed7a5c813d98a7b421cda589959dc38657de8068e6a87fb4cb74948700f8"} Feb 16 02:40:17.982381 master-0 kubenswrapper[31559]: I0216 02:40:17.982349 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7437ed7a5c813d98a7b421cda589959dc38657de8068e6a87fb4cb74948700f8" Feb 16 02:40:17.982381 master-0 kubenswrapper[31559]: I0216 02:40:17.982357 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"f3c55abd-a513-492c-a28d-57b494edf9f9","Type":"ContainerStarted","Data":"05a8c66b492cffd860d09ff155679380e4d86d057a444ffcb0f8021c4dfcbfcf"} Feb 16 02:40:17.982381 master-0 kubenswrapper[31559]: I0216 02:40:17.982368 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-2p5mt" event={"ID":"e68f2b97-da69-4ad8-aa22-20cbf6fcb819","Type":"ContainerDied","Data":"e6bbea076546c7400fdafe4f2000c907151e73ce02439ce04f17a22b27b86a09"} Feb 16 02:40:17.982381 master-0 kubenswrapper[31559]: I0216 02:40:17.982381 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-245b-account-create-update-rgllf" event={"ID":"c6448424-28c1-42d1-9f7f-67db21f0e53c","Type":"ContainerDied","Data":"b9d57ce5454eda9d9d301ba676a1fb525b22f3b4e61138c857bcdbad21fd4ea7"} Feb 16 02:40:17.982692 master-0 kubenswrapper[31559]: I0216 02:40:17.982393 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9d57ce5454eda9d9d301ba676a1fb525b22f3b4e61138c857bcdbad21fd4ea7" Feb 16 02:40:17.982692 master-0 kubenswrapper[31559]: I0216 02:40:17.982402 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xv575" 
event={"ID":"85150fe7-7e88-4ebb-a2c7-643274767b45","Type":"ContainerDied","Data":"08d8fde1b124abe574038b9e2a9f24f5f32b15ebb705c877aba570953a689723"} Feb 16 02:40:17.982692 master-0 kubenswrapper[31559]: I0216 02:40:17.982413 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d8fde1b124abe574038b9e2a9f24f5f32b15ebb705c877aba570953a689723" Feb 16 02:40:17.982692 master-0 kubenswrapper[31559]: I0216 02:40:17.982420 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vc447" event={"ID":"68b63ce3-aaa3-4eeb-9264-48d821ee7f81","Type":"ContainerDied","Data":"5fbb07ee8d64f3828feb0785871cd57467314c116c504792e3ff88dcf857e574"} Feb 16 02:40:17.982692 master-0 kubenswrapper[31559]: I0216 02:40:17.982442 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fbb07ee8d64f3828feb0785871cd57467314c116c504792e3ff88dcf857e574" Feb 16 02:40:17.982692 master-0 kubenswrapper[31559]: I0216 02:40:17.982450 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e957-account-create-update-7245k" event={"ID":"3376cdb9-0b42-43bc-a145-81508f342ccd","Type":"ContainerDied","Data":"10e6db89b7b66e8ee2b11ca79726596a6f4e9a9f367ddcd3ce1166b3b5eaf439"} Feb 16 02:40:17.982692 master-0 kubenswrapper[31559]: I0216 02:40:17.982459 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e6db89b7b66e8ee2b11ca79726596a6f4e9a9f367ddcd3ce1166b3b5eaf439" Feb 16 02:40:17.983054 master-0 kubenswrapper[31559]: I0216 02:40:17.982721 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323b3672-0931-4d00-9d68-6d4eae9a4cec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "323b3672-0931-4d00-9d68-6d4eae9a4cec" (UID: "323b3672-0931-4d00-9d68-6d4eae9a4cec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:40:17.984984 master-0 kubenswrapper[31559]: I0216 02:40:17.984959 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3376cdb9-0b42-43bc-a145-81508f342ccd-kube-api-access-m8hdc" (OuterVolumeSpecName: "kube-api-access-m8hdc") pod "3376cdb9-0b42-43bc-a145-81508f342ccd" (UID: "3376cdb9-0b42-43bc-a145-81508f342ccd"). InnerVolumeSpecName "kube-api-access-m8hdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:17.986201 master-0 kubenswrapper[31559]: I0216 02:40:17.986156 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6448424-28c1-42d1-9f7f-67db21f0e53c-kube-api-access-zcndj" (OuterVolumeSpecName: "kube-api-access-zcndj") pod "c6448424-28c1-42d1-9f7f-67db21f0e53c" (UID: "c6448424-28c1-42d1-9f7f-67db21f0e53c"). InnerVolumeSpecName "kube-api-access-zcndj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:17.993916 master-0 kubenswrapper[31559]: I0216 02:40:17.993835 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323b3672-0931-4d00-9d68-6d4eae9a4cec-kube-api-access-p4mm5" (OuterVolumeSpecName: "kube-api-access-p4mm5") pod "323b3672-0931-4d00-9d68-6d4eae9a4cec" (UID: "323b3672-0931-4d00-9d68-6d4eae9a4cec"). InnerVolumeSpecName "kube-api-access-p4mm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:17.998798 master-0 kubenswrapper[31559]: I0216 02:40:17.998730 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-kube-api-access-rr5zn" (OuterVolumeSpecName: "kube-api-access-rr5zn") pod "68b63ce3-aaa3-4eeb-9264-48d821ee7f81" (UID: "68b63ce3-aaa3-4eeb-9264-48d821ee7f81"). InnerVolumeSpecName "kube-api-access-rr5zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:18.107664 master-0 kubenswrapper[31559]: I0216 02:40:18.107598 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr5zn\" (UniqueName: \"kubernetes.io/projected/68b63ce3-aaa3-4eeb-9264-48d821ee7f81-kube-api-access-rr5zn\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:18.107664 master-0 kubenswrapper[31559]: I0216 02:40:18.107646 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/323b3672-0931-4d00-9d68-6d4eae9a4cec-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:18.107664 master-0 kubenswrapper[31559]: I0216 02:40:18.107658 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8hdc\" (UniqueName: \"kubernetes.io/projected/3376cdb9-0b42-43bc-a145-81508f342ccd-kube-api-access-m8hdc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:18.107664 master-0 kubenswrapper[31559]: I0216 02:40:18.107668 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4mm5\" (UniqueName: \"kubernetes.io/projected/323b3672-0931-4d00-9d68-6d4eae9a4cec-kube-api-access-p4mm5\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:18.107664 master-0 kubenswrapper[31559]: I0216 02:40:18.107678 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcndj\" (UniqueName: \"kubernetes.io/projected/c6448424-28c1-42d1-9f7f-67db21f0e53c-kube-api-access-zcndj\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:18.210461 master-0 kubenswrapper[31559]: I0216 02:40:18.209428 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jshp9\" (UniqueName: \"kubernetes.io/projected/85150fe7-7e88-4ebb-a2c7-643274767b45-kube-api-access-jshp9\") pod \"85150fe7-7e88-4ebb-a2c7-643274767b45\" (UID: \"85150fe7-7e88-4ebb-a2c7-643274767b45\") " Feb 16 02:40:18.210461 master-0 kubenswrapper[31559]: I0216 
02:40:18.209637 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85150fe7-7e88-4ebb-a2c7-643274767b45-operator-scripts\") pod \"85150fe7-7e88-4ebb-a2c7-643274767b45\" (UID: \"85150fe7-7e88-4ebb-a2c7-643274767b45\") " Feb 16 02:40:18.220710 master-0 kubenswrapper[31559]: I0216 02:40:18.211144 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85150fe7-7e88-4ebb-a2c7-643274767b45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85150fe7-7e88-4ebb-a2c7-643274767b45" (UID: "85150fe7-7e88-4ebb-a2c7-643274767b45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:40:18.223622 master-0 kubenswrapper[31559]: I0216 02:40:18.223547 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85150fe7-7e88-4ebb-a2c7-643274767b45-kube-api-access-jshp9" (OuterVolumeSpecName: "kube-api-access-jshp9") pod "85150fe7-7e88-4ebb-a2c7-643274767b45" (UID: "85150fe7-7e88-4ebb-a2c7-643274767b45"). InnerVolumeSpecName "kube-api-access-jshp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:18.312668 master-0 kubenswrapper[31559]: I0216 02:40:18.312616 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jshp9\" (UniqueName: \"kubernetes.io/projected/85150fe7-7e88-4ebb-a2c7-643274767b45-kube-api-access-jshp9\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:18.312668 master-0 kubenswrapper[31559]: I0216 02:40:18.312658 31559 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85150fe7-7e88-4ebb-a2c7-643274767b45-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:18.996029 master-0 kubenswrapper[31559]: I0216 02:40:18.995758 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"f3c55abd-a513-492c-a28d-57b494edf9f9","Type":"ContainerStarted","Data":"87ce79c731e093a1b03fc18f9036dee71b09b847e6a31549e8aa4f564b936b5c"} Feb 16 02:40:18.996029 master-0 kubenswrapper[31559]: I0216 02:40:18.996033 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-dde57-api-0" Feb 16 02:40:18.996638 master-0 kubenswrapper[31559]: I0216 02:40:18.995937 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xv575" Feb 16 02:40:18.996638 master-0 kubenswrapper[31559]: I0216 02:40:18.996050 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dde57-api-0" event={"ID":"f3c55abd-a513-492c-a28d-57b494edf9f9","Type":"ContainerStarted","Data":"ebe271abdcd782ed2a8afa0c2aae2cc221ad2df00add7ceee493288232281097"} Feb 16 02:40:19.072734 master-0 kubenswrapper[31559]: I0216 02:40:19.071512 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dde57-api-0" podStartSLOduration=3.071490928 podStartE2EDuration="3.071490928s" podCreationTimestamp="2026-02-16 02:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:40:19.035187088 +0000 UTC m=+1071.379793123" watchObservedRunningTime="2026-02-16 02:40:19.071490928 +0000 UTC m=+1071.416096943" Feb 16 02:40:19.419618 master-0 kubenswrapper[31559]: I0216 02:40:19.419580 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:19.564248 master-0 kubenswrapper[31559]: I0216 02:40:19.564077 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-scripts\") pod \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " Feb 16 02:40:19.565950 master-0 kubenswrapper[31559]: I0216 02:40:19.564556 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-combined-ca-bundle\") pod \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " Feb 16 02:40:19.565950 master-0 kubenswrapper[31559]: I0216 02:40:19.564686 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-config\") pod \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " Feb 16 02:40:19.565950 master-0 kubenswrapper[31559]: I0216 02:40:19.564723 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " Feb 16 02:40:19.565950 master-0 kubenswrapper[31559]: I0216 02:40:19.564749 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lmpv\" (UniqueName: \"kubernetes.io/projected/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-kube-api-access-6lmpv\") pod \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " Feb 16 02:40:19.565950 master-0 kubenswrapper[31559]: 
I0216 02:40:19.564807 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-etc-podinfo\") pod \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " Feb 16 02:40:19.565950 master-0 kubenswrapper[31559]: I0216 02:40:19.564916 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic\") pod \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\" (UID: \"e68f2b97-da69-4ad8-aa22-20cbf6fcb819\") " Feb 16 02:40:19.565950 master-0 kubenswrapper[31559]: I0216 02:40:19.565238 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "e68f2b97-da69-4ad8-aa22-20cbf6fcb819" (UID: "e68f2b97-da69-4ad8-aa22-20cbf6fcb819"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:19.566489 master-0 kubenswrapper[31559]: I0216 02:40:19.566157 31559 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:19.566728 master-0 kubenswrapper[31559]: I0216 02:40:19.566680 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "e68f2b97-da69-4ad8-aa22-20cbf6fcb819" (UID: "e68f2b97-da69-4ad8-aa22-20cbf6fcb819"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:19.566992 master-0 kubenswrapper[31559]: I0216 02:40:19.566878 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-scripts" (OuterVolumeSpecName: "scripts") pod "e68f2b97-da69-4ad8-aa22-20cbf6fcb819" (UID: "e68f2b97-da69-4ad8-aa22-20cbf6fcb819"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:19.569850 master-0 kubenswrapper[31559]: I0216 02:40:19.568948 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "e68f2b97-da69-4ad8-aa22-20cbf6fcb819" (UID: "e68f2b97-da69-4ad8-aa22-20cbf6fcb819"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 02:40:19.569850 master-0 kubenswrapper[31559]: I0216 02:40:19.569784 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-kube-api-access-6lmpv" (OuterVolumeSpecName: "kube-api-access-6lmpv") pod "e68f2b97-da69-4ad8-aa22-20cbf6fcb819" (UID: "e68f2b97-da69-4ad8-aa22-20cbf6fcb819"). InnerVolumeSpecName "kube-api-access-6lmpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:19.596663 master-0 kubenswrapper[31559]: I0216 02:40:19.596592 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-config" (OuterVolumeSpecName: "config") pod "e68f2b97-da69-4ad8-aa22-20cbf6fcb819" (UID: "e68f2b97-da69-4ad8-aa22-20cbf6fcb819"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:19.610170 master-0 kubenswrapper[31559]: I0216 02:40:19.610080 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e68f2b97-da69-4ad8-aa22-20cbf6fcb819" (UID: "e68f2b97-da69-4ad8-aa22-20cbf6fcb819"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:19.671455 master-0 kubenswrapper[31559]: I0216 02:40:19.670851 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:19.671455 master-0 kubenswrapper[31559]: I0216 02:40:19.670897 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:19.671455 master-0 kubenswrapper[31559]: I0216 02:40:19.670943 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lmpv\" (UniqueName: \"kubernetes.io/projected/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-kube-api-access-6lmpv\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:19.671455 master-0 kubenswrapper[31559]: I0216 02:40:19.671023 31559 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:19.671455 master-0 kubenswrapper[31559]: I0216 02:40:19.671087 31559 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:19.671455 master-0 
kubenswrapper[31559]: I0216 02:40:19.671099 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68f2b97-da69-4ad8-aa22-20cbf6fcb819-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:20.018883 master-0 kubenswrapper[31559]: I0216 02:40:20.018099 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-2p5mt" event={"ID":"e68f2b97-da69-4ad8-aa22-20cbf6fcb819","Type":"ContainerDied","Data":"5c16a0dee925858f3663d66585e3aa25b12c6573f7e1bba6974bdae0c39da006"} Feb 16 02:40:20.018883 master-0 kubenswrapper[31559]: I0216 02:40:20.018153 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c16a0dee925858f3663d66585e3aa25b12c6573f7e1bba6974bdae0c39da006" Feb 16 02:40:20.018883 master-0 kubenswrapper[31559]: I0216 02:40:20.018287 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-2p5mt" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.685737 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dcfcd5c95-986r2"] Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: E0216 02:40:20.686195 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b63ce3-aaa3-4eeb-9264-48d821ee7f81" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686207 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b63ce3-aaa3-4eeb-9264-48d821ee7f81" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: E0216 02:40:20.686253 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85150fe7-7e88-4ebb-a2c7-643274767b45" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686260 31559 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="85150fe7-7e88-4ebb-a2c7-643274767b45" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: E0216 02:40:20.686269 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed677e03-0917-4744-93f0-d7e64470c27d" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686277 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed677e03-0917-4744-93f0-d7e64470c27d" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: E0216 02:40:20.686285 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68f2b97-da69-4ad8-aa22-20cbf6fcb819" containerName="ironic-inspector-db-sync" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686290 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68f2b97-da69-4ad8-aa22-20cbf6fcb819" containerName="ironic-inspector-db-sync" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: E0216 02:40:20.686305 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323b3672-0931-4d00-9d68-6d4eae9a4cec" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686311 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="323b3672-0931-4d00-9d68-6d4eae9a4cec" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: E0216 02:40:20.686326 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6448424-28c1-42d1-9f7f-67db21f0e53c" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686332 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6448424-28c1-42d1-9f7f-67db21f0e53c" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: E0216 02:40:20.686348 31559 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3376cdb9-0b42-43bc-a145-81508f342ccd" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686354 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="3376cdb9-0b42-43bc-a145-81508f342ccd" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686544 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="323b3672-0931-4d00-9d68-6d4eae9a4cec" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686571 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68f2b97-da69-4ad8-aa22-20cbf6fcb819" containerName="ironic-inspector-db-sync" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686590 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="3376cdb9-0b42-43bc-a145-81508f342ccd" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686604 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed677e03-0917-4744-93f0-d7e64470c27d" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686629 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6448424-28c1-42d1-9f7f-67db21f0e53c" containerName="mariadb-account-create-update" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686640 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b63ce3-aaa3-4eeb-9264-48d821ee7f81" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 02:40:20.686655 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="85150fe7-7e88-4ebb-a2c7-643274767b45" containerName="mariadb-database-create" Feb 16 02:40:20.687902 master-0 kubenswrapper[31559]: I0216 
02:40:20.687735 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.796369 master-0 kubenswrapper[31559]: I0216 02:40:20.795955 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-config\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.796369 master-0 kubenswrapper[31559]: I0216 02:40:20.796147 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-nb\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.796369 master-0 kubenswrapper[31559]: I0216 02:40:20.796369 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-svc\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.796721 master-0 kubenswrapper[31559]: I0216 02:40:20.796589 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2qz8\" (UniqueName: \"kubernetes.io/projected/416c4107-c9b1-4c74-8084-5e5b3ea18696-kube-api-access-n2qz8\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.796772 master-0 kubenswrapper[31559]: I0216 02:40:20.796744 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-sb\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.796973 master-0 kubenswrapper[31559]: I0216 02:40:20.796932 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-swift-storage-0\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.900074 master-0 kubenswrapper[31559]: I0216 02:40:20.899931 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-swift-storage-0\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.900298 master-0 kubenswrapper[31559]: I0216 02:40:20.900109 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-config\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.900298 master-0 kubenswrapper[31559]: I0216 02:40:20.900213 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-nb\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.900399 master-0 kubenswrapper[31559]: I0216 02:40:20.900320 31559 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-svc\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.900471 master-0 kubenswrapper[31559]: I0216 02:40:20.900402 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2qz8\" (UniqueName: \"kubernetes.io/projected/416c4107-c9b1-4c74-8084-5e5b3ea18696-kube-api-access-n2qz8\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.900560 master-0 kubenswrapper[31559]: I0216 02:40:20.900529 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-sb\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.902353 master-0 kubenswrapper[31559]: I0216 02:40:20.902310 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-sb\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.903695 master-0 kubenswrapper[31559]: I0216 02:40:20.903653 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-swift-storage-0\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.904883 master-0 kubenswrapper[31559]: I0216 02:40:20.904843 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-config\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.906063 master-0 kubenswrapper[31559]: I0216 02:40:20.906018 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-nb\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:20.907292 master-0 kubenswrapper[31559]: I0216 02:40:20.907252 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-svc\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:21.099742 master-0 kubenswrapper[31559]: I0216 02:40:21.099684 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2qz8\" (UniqueName: \"kubernetes.io/projected/416c4107-c9b1-4c74-8084-5e5b3ea18696-kube-api-access-n2qz8\") pod \"dnsmasq-dns-6dcfcd5c95-986r2\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:21.114169 master-0 kubenswrapper[31559]: I0216 02:40:21.114091 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dcfcd5c95-986r2"] Feb 16 02:40:21.136773 master-0 kubenswrapper[31559]: I0216 02:40:21.136061 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:21.149188 master-0 kubenswrapper[31559]: I0216 02:40:21.149135 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 02:40:21.151271 master-0 kubenswrapper[31559]: I0216 02:40:21.151183 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 16 02:40:21.151761 master-0 kubenswrapper[31559]: I0216 02:40:21.151738 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 02:40:21.152309 master-0 kubenswrapper[31559]: I0216 02:40:21.152225 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 02:40:21.167198 master-0 kubenswrapper[31559]: I0216 02:40:21.167125 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:21.316083 master-0 kubenswrapper[31559]: I0216 02:40:21.315540 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:21.326157 master-0 kubenswrapper[31559]: I0216 02:40:21.326084 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.327090 master-0 kubenswrapper[31559]: I0216 02:40:21.327011 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.327312 master-0 kubenswrapper[31559]: I0216 02:40:21.327132 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.327312 master-0 kubenswrapper[31559]: I0216 02:40:21.327240 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbwn8\" (UniqueName: \"kubernetes.io/projected/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-kube-api-access-sbwn8\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.327312 master-0 kubenswrapper[31559]: I0216 02:40:21.327264 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-config\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.327312 master-0 kubenswrapper[31559]: I0216 02:40:21.327317 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-scripts\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.327879 master-0 kubenswrapper[31559]: I0216 02:40:21.327402 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.430296 master-0 kubenswrapper[31559]: I0216 02:40:21.430233 31559 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sbwn8\" (UniqueName: \"kubernetes.io/projected/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-kube-api-access-sbwn8\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.430550 master-0 kubenswrapper[31559]: I0216 02:40:21.430302 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-config\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.430550 master-0 kubenswrapper[31559]: I0216 02:40:21.430372 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-scripts\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.430550 master-0 kubenswrapper[31559]: I0216 02:40:21.430461 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.430678 master-0 kubenswrapper[31559]: I0216 02:40:21.430659 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.430744 master-0 kubenswrapper[31559]: I0216 02:40:21.430731 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: 
\"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.430871 master-0 kubenswrapper[31559]: I0216 02:40:21.430856 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.432085 master-0 kubenswrapper[31559]: I0216 02:40:21.431932 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.433074 master-0 kubenswrapper[31559]: I0216 02:40:21.433005 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.436796 master-0 kubenswrapper[31559]: I0216 02:40:21.434803 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.436796 master-0 kubenswrapper[31559]: I0216 02:40:21.435835 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-config\") pod 
\"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.436796 master-0 kubenswrapper[31559]: I0216 02:40:21.435926 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-scripts\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.441857 master-0 kubenswrapper[31559]: I0216 02:40:21.438622 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.452376 master-0 kubenswrapper[31559]: I0216 02:40:21.451912 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbwn8\" (UniqueName: \"kubernetes.io/projected/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-kube-api-access-sbwn8\") pod \"ironic-inspector-0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:21.475549 master-0 kubenswrapper[31559]: I0216 02:40:21.467520 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 02:40:21.837786 master-0 kubenswrapper[31559]: I0216 02:40:21.837718 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dcfcd5c95-986r2"] Feb 16 02:40:21.926396 master-0 kubenswrapper[31559]: I0216 02:40:21.926344 31559 scope.go:117] "RemoveContainer" containerID="8f139a71e4d429c694dd54dcec5c0508e8a590e9c75fd37980746cd97af2f3ee" Feb 16 02:40:22.147954 master-0 kubenswrapper[31559]: I0216 02:40:22.147900 31559 generic.go:334] "Generic (PLEG): container finished" podID="cd9993cd-827d-4976-ac75-954bb9ace111" containerID="f45d4d96aae3bcd16a415272cdfd08b0b444279c01764ca6c58606b1e088e4ee" exitCode=0 Feb 16 02:40:22.148605 master-0 kubenswrapper[31559]: I0216 02:40:22.147985 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687464c96-4rx8g" event={"ID":"cd9993cd-827d-4976-ac75-954bb9ace111","Type":"ContainerDied","Data":"f45d4d96aae3bcd16a415272cdfd08b0b444279c01764ca6c58606b1e088e4ee"} Feb 16 02:40:22.152831 master-0 kubenswrapper[31559]: I0216 02:40:22.152186 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" event={"ID":"416c4107-c9b1-4c74-8084-5e5b3ea18696","Type":"ContainerStarted","Data":"cbca2071a74a5f5b48c10860a2ca4ffc816cfc0405215bee1a7960caed36c5ed"} Feb 16 02:40:22.152996 master-0 kubenswrapper[31559]: I0216 02:40:22.152846 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" event={"ID":"416c4107-c9b1-4c74-8084-5e5b3ea18696","Type":"ContainerStarted","Data":"bc35d2496277f24b3feda3f5faca5048481d176724a381fbe9d9fcd0b0ad9940"} Feb 16 02:40:22.225112 master-0 kubenswrapper[31559]: I0216 02:40:22.224829 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:22.226914 master-0 kubenswrapper[31559]: W0216 02:40:22.226877 31559 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b7b092_d70d_4cdf_85d6_9d98d6b97ca0.slice/crio-27f3b1c8f975b87ee6b11d38d84a11ee466a6e12cbc23f7eb69dff130e40a8c8 WatchSource:0}: Error finding container 27f3b1c8f975b87ee6b11d38d84a11ee466a6e12cbc23f7eb69dff130e40a8c8: Status 404 returned error can't find the container with id 27f3b1c8f975b87ee6b11d38d84a11ee466a6e12cbc23f7eb69dff130e40a8c8 Feb 16 02:40:22.245602 master-0 kubenswrapper[31559]: I0216 02:40:22.245155 31559 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 02:40:22.847915 master-0 kubenswrapper[31559]: I0216 02:40:22.847749 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4dzgp"] Feb 16 02:40:22.850275 master-0 kubenswrapper[31559]: I0216 02:40:22.850224 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.856874 master-0 kubenswrapper[31559]: I0216 02:40:22.856581 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 02:40:22.856874 master-0 kubenswrapper[31559]: I0216 02:40:22.856783 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 02:40:22.877146 master-0 kubenswrapper[31559]: I0216 02:40:22.876980 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-scripts\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.877146 master-0 kubenswrapper[31559]: I0216 02:40:22.877086 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2dkl\" (UniqueName: 
\"kubernetes.io/projected/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-kube-api-access-n2dkl\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.877466 master-0 kubenswrapper[31559]: I0216 02:40:22.877195 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.877466 master-0 kubenswrapper[31559]: I0216 02:40:22.877299 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-config-data\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.894657 master-0 kubenswrapper[31559]: I0216 02:40:22.894595 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4dzgp"] Feb 16 02:40:22.984047 master-0 kubenswrapper[31559]: I0216 02:40:22.982626 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2dkl\" (UniqueName: \"kubernetes.io/projected/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-kube-api-access-n2dkl\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.984047 master-0 kubenswrapper[31559]: I0216 02:40:22.982829 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.984047 master-0 kubenswrapper[31559]: I0216 02:40:22.983506 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-config-data\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.984047 master-0 kubenswrapper[31559]: I0216 02:40:22.983809 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-scripts\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.991784 master-0 kubenswrapper[31559]: I0216 02:40:22.991728 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-scripts\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:22.999014 master-0 kubenswrapper[31559]: I0216 02:40:22.994753 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:40:22.999014 master-0 kubenswrapper[31559]: I0216 02:40:22.995005 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-72940-default-external-api-0" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" containerName="glance-log" containerID="cri-o://84179352863d97c35954bf2153a31c72640c6c68fb23f5bda3ccdb7a1950ede3" gracePeriod=30 Feb 16 02:40:22.999014 master-0 kubenswrapper[31559]: I0216 02:40:22.995497 31559 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-72940-default-external-api-0" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" containerName="glance-httpd" containerID="cri-o://d727907c96d4b6dc6c4393991a32d768caf72eeec9fb1de049864242f34db444" gracePeriod=30 Feb 16 02:40:22.999014 master-0 kubenswrapper[31559]: I0216 02:40:22.997590 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-config-data\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:23.010600 master-0 kubenswrapper[31559]: I0216 02:40:23.007640 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:23.011783 master-0 kubenswrapper[31559]: I0216 02:40:23.011750 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2dkl\" (UniqueName: \"kubernetes.io/projected/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-kube-api-access-n2dkl\") pod \"nova-cell0-conductor-db-sync-4dzgp\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:23.181245 master-0 kubenswrapper[31559]: I0216 02:40:23.180952 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:40:23.210913 master-0 kubenswrapper[31559]: I0216 02:40:23.210809 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0","Type":"ContainerStarted","Data":"27f3b1c8f975b87ee6b11d38d84a11ee466a6e12cbc23f7eb69dff130e40a8c8"} Feb 16 02:40:23.225792 master-0 kubenswrapper[31559]: I0216 02:40:23.225607 31559 generic.go:334] "Generic (PLEG): container finished" podID="416c4107-c9b1-4c74-8084-5e5b3ea18696" containerID="cbca2071a74a5f5b48c10860a2ca4ffc816cfc0405215bee1a7960caed36c5ed" exitCode=0 Feb 16 02:40:23.225792 master-0 kubenswrapper[31559]: I0216 02:40:23.225680 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" event={"ID":"416c4107-c9b1-4c74-8084-5e5b3ea18696","Type":"ContainerDied","Data":"cbca2071a74a5f5b48c10860a2ca4ffc816cfc0405215bee1a7960caed36c5ed"} Feb 16 02:40:23.225792 master-0 kubenswrapper[31559]: I0216 02:40:23.225710 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" event={"ID":"416c4107-c9b1-4c74-8084-5e5b3ea18696","Type":"ContainerStarted","Data":"1bfa7679a8bea2070f612a1384832a6f1cfa76ca29e92456471e9c440abc1790"} Feb 16 02:40:23.229797 master-0 kubenswrapper[31559]: I0216 02:40:23.229757 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:23.250873 master-0 kubenswrapper[31559]: I0216 02:40:23.250776 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" event={"ID":"ec163f18-db19-4327-ae8b-6feb4c6004af","Type":"ContainerStarted","Data":"777fceca4e648ddf4b4b2a33354f70bb54bb454f2ca62f314928d393685c8f72"} Feb 16 02:40:23.251102 master-0 kubenswrapper[31559]: I0216 02:40:23.251057 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:40:23.258701 master-0 kubenswrapper[31559]: I0216 02:40:23.258662 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b2014b4-1c15-4883-8435-1f26396ed008" containerID="84179352863d97c35954bf2153a31c72640c6c68fb23f5bda3ccdb7a1950ede3" exitCode=143 Feb 16 02:40:23.258777 master-0 kubenswrapper[31559]: I0216 02:40:23.258709 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" event={"ID":"9b2014b4-1c15-4883-8435-1f26396ed008","Type":"ContainerDied","Data":"84179352863d97c35954bf2153a31c72640c6c68fb23f5bda3ccdb7a1950ede3"} Feb 16 02:40:23.266921 master-0 kubenswrapper[31559]: I0216 02:40:23.266841 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" podStartSLOduration=3.266817253 podStartE2EDuration="3.266817253s" podCreationTimestamp="2026-02-16 02:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:40:23.257580219 +0000 UTC m=+1075.602186234" watchObservedRunningTime="2026-02-16 02:40:23.266817253 +0000 UTC m=+1075.611423268" Feb 16 02:40:23.563595 master-0 kubenswrapper[31559]: I0216 02:40:23.563454 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5687464c96-4rx8g" Feb 16 02:40:23.624056 master-0 kubenswrapper[31559]: I0216 02:40:23.623398 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-config\") pod \"cd9993cd-827d-4976-ac75-954bb9ace111\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " Feb 16 02:40:23.624056 master-0 kubenswrapper[31559]: I0216 02:40:23.623548 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-ovndb-tls-certs\") pod \"cd9993cd-827d-4976-ac75-954bb9ace111\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " Feb 16 02:40:23.624056 master-0 kubenswrapper[31559]: I0216 02:40:23.623749 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-combined-ca-bundle\") pod \"cd9993cd-827d-4976-ac75-954bb9ace111\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " Feb 16 02:40:23.624056 master-0 kubenswrapper[31559]: I0216 02:40:23.623853 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpxww\" (UniqueName: \"kubernetes.io/projected/cd9993cd-827d-4976-ac75-954bb9ace111-kube-api-access-zpxww\") pod \"cd9993cd-827d-4976-ac75-954bb9ace111\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " Feb 16 02:40:23.627453 master-0 kubenswrapper[31559]: I0216 02:40:23.624644 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-httpd-config\") pod \"cd9993cd-827d-4976-ac75-954bb9ace111\" (UID: \"cd9993cd-827d-4976-ac75-954bb9ace111\") " Feb 16 02:40:23.630373 master-0 kubenswrapper[31559]: I0216 02:40:23.629572 31559 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cd9993cd-827d-4976-ac75-954bb9ace111" (UID: "cd9993cd-827d-4976-ac75-954bb9ace111"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:23.637305 master-0 kubenswrapper[31559]: I0216 02:40:23.637216 31559 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-httpd-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:23.638193 master-0 kubenswrapper[31559]: I0216 02:40:23.638037 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9993cd-827d-4976-ac75-954bb9ace111-kube-api-access-zpxww" (OuterVolumeSpecName: "kube-api-access-zpxww") pod "cd9993cd-827d-4976-ac75-954bb9ace111" (UID: "cd9993cd-827d-4976-ac75-954bb9ace111"). InnerVolumeSpecName "kube-api-access-zpxww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:23.690306 master-0 kubenswrapper[31559]: I0216 02:40:23.690107 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd9993cd-827d-4976-ac75-954bb9ace111" (UID: "cd9993cd-827d-4976-ac75-954bb9ace111"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:23.706802 master-0 kubenswrapper[31559]: I0216 02:40:23.706735 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-config" (OuterVolumeSpecName: "config") pod "cd9993cd-827d-4976-ac75-954bb9ace111" (UID: "cd9993cd-827d-4976-ac75-954bb9ace111"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:23.730904 master-0 kubenswrapper[31559]: I0216 02:40:23.730853 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4dzgp"] Feb 16 02:40:23.732490 master-0 kubenswrapper[31559]: I0216 02:40:23.732328 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cd9993cd-827d-4976-ac75-954bb9ace111" (UID: "cd9993cd-827d-4976-ac75-954bb9ace111"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:23.735276 master-0 kubenswrapper[31559]: W0216 02:40:23.735003 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b2abcfb_1594_4f85_8068_80c1a8d7fc3e.slice/crio-c7c98ce38649b3662c2623c77743639f913dd96e782703a26b2d68336f65cb87 WatchSource:0}: Error finding container c7c98ce38649b3662c2623c77743639f913dd96e782703a26b2d68336f65cb87: Status 404 returned error can't find the container with id c7c98ce38649b3662c2623c77743639f913dd96e782703a26b2d68336f65cb87 Feb 16 02:40:23.739338 master-0 kubenswrapper[31559]: I0216 02:40:23.739031 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:23.739338 master-0 kubenswrapper[31559]: I0216 02:40:23.739052 31559 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:23.739338 master-0 kubenswrapper[31559]: I0216 02:40:23.739063 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd9993cd-827d-4976-ac75-954bb9ace111-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:23.739338 master-0 kubenswrapper[31559]: I0216 02:40:23.739074 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpxww\" (UniqueName: \"kubernetes.io/projected/cd9993cd-827d-4976-ac75-954bb9ace111-kube-api-access-zpxww\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:23.951789 master-0 kubenswrapper[31559]: I0216 02:40:23.950411 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-72940-default-internal-api-0"] Feb 16 02:40:23.951789 master-0 kubenswrapper[31559]: I0216 02:40:23.950726 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-72940-default-internal-api-0" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerName="glance-log" containerID="cri-o://852e77019e6f36e648b06570ae813160acdcdcb8bf5958343b9075f3cf0d9bf3" gracePeriod=30 Feb 16 02:40:23.951789 master-0 kubenswrapper[31559]: I0216 02:40:23.951249 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-72940-default-internal-api-0" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerName="glance-httpd" containerID="cri-o://a156719f3471713b64f07c6e9ffea069af70f3c5cfb89722283b7cd9365bd26d" gracePeriod=30 Feb 16 02:40:24.277105 master-0 kubenswrapper[31559]: I0216 02:40:24.273768 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687464c96-4rx8g" event={"ID":"cd9993cd-827d-4976-ac75-954bb9ace111","Type":"ContainerDied","Data":"d1ad75a1acbe8733430cf2074295e5cbccf242d8f095c00805af4dc8bf2f14d0"} Feb 16 02:40:24.277105 master-0 kubenswrapper[31559]: I0216 02:40:24.273800 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5687464c96-4rx8g" Feb 16 02:40:24.277105 master-0 kubenswrapper[31559]: I0216 02:40:24.273843 31559 scope.go:117] "RemoveContainer" containerID="742ab028b0f99a9d936cbb8dd6ec80a01d1400a786fdd89945263ec04b24b11f" Feb 16 02:40:24.277105 master-0 kubenswrapper[31559]: I0216 02:40:24.276751 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" event={"ID":"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e","Type":"ContainerStarted","Data":"c7c98ce38649b3662c2623c77743639f913dd96e782703a26b2d68336f65cb87"} Feb 16 02:40:24.284572 master-0 kubenswrapper[31559]: I0216 02:40:24.284470 31559 generic.go:334] "Generic (PLEG): container finished" podID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerID="852e77019e6f36e648b06570ae813160acdcdcb8bf5958343b9075f3cf0d9bf3" exitCode=143 Feb 16 02:40:24.284678 master-0 kubenswrapper[31559]: I0216 02:40:24.284575 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"e0e776d3-0db2-474c-ad7f-6798e0b59a3c","Type":"ContainerDied","Data":"852e77019e6f36e648b06570ae813160acdcdcb8bf5958343b9075f3cf0d9bf3"} Feb 16 02:40:24.304088 master-0 kubenswrapper[31559]: I0216 02:40:24.304029 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5687464c96-4rx8g"] Feb 16 02:40:24.308742 master-0 kubenswrapper[31559]: I0216 02:40:24.308386 31559 scope.go:117] "RemoveContainer" containerID="f45d4d96aae3bcd16a415272cdfd08b0b444279c01764ca6c58606b1e088e4ee" Feb 16 02:40:24.318505 master-0 kubenswrapper[31559]: I0216 02:40:24.317331 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5687464c96-4rx8g"] Feb 16 02:40:24.348407 master-0 kubenswrapper[31559]: I0216 02:40:24.347183 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:25.942843 master-0 kubenswrapper[31559]: I0216 02:40:25.942795 31559 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" path="/var/lib/kubelet/pods/cd9993cd-827d-4976-ac75-954bb9ace111/volumes" Feb 16 02:40:26.317341 master-0 kubenswrapper[31559]: I0216 02:40:26.317271 31559 generic.go:334] "Generic (PLEG): container finished" podID="9b2014b4-1c15-4883-8435-1f26396ed008" containerID="d727907c96d4b6dc6c4393991a32d768caf72eeec9fb1de049864242f34db444" exitCode=0 Feb 16 02:40:26.317628 master-0 kubenswrapper[31559]: I0216 02:40:26.317361 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" event={"ID":"9b2014b4-1c15-4883-8435-1f26396ed008","Type":"ContainerDied","Data":"d727907c96d4b6dc6c4393991a32d768caf72eeec9fb1de049864242f34db444"} Feb 16 02:40:26.944152 master-0 kubenswrapper[31559]: I0216 02:40:26.943861 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-57d76bb68d-7wt45" Feb 16 02:40:27.334392 master-0 kubenswrapper[31559]: I0216 02:40:27.334189 31559 generic.go:334] "Generic (PLEG): container finished" podID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerID="a156719f3471713b64f07c6e9ffea069af70f3c5cfb89722283b7cd9365bd26d" exitCode=0 Feb 16 02:40:27.334392 master-0 kubenswrapper[31559]: I0216 02:40:27.334257 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"e0e776d3-0db2-474c-ad7f-6798e0b59a3c","Type":"ContainerDied","Data":"a156719f3471713b64f07c6e9ffea069af70f3c5cfb89722283b7cd9365bd26d"} Feb 16 02:40:28.209208 master-0 kubenswrapper[31559]: I0216 02:40:28.207937 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-dde57-api-0" Feb 16 02:40:31.256266 master-0 kubenswrapper[31559]: I0216 02:40:31.256212 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:31.317396 master-0 kubenswrapper[31559]: I0216 02:40:31.317278 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:40:31.389135 master-0 kubenswrapper[31559]: I0216 02:40:31.389019 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-scripts\") pod \"9b2014b4-1c15-4883-8435-1f26396ed008\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.389297 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-logs\") pod \"9b2014b4-1c15-4883-8435-1f26396ed008\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.389342 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-combined-ca-bundle\") pod \"9b2014b4-1c15-4883-8435-1f26396ed008\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.389427 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpdlm\" (UniqueName: \"kubernetes.io/projected/9b2014b4-1c15-4883-8435-1f26396ed008-kube-api-access-fpdlm\") pod \"9b2014b4-1c15-4883-8435-1f26396ed008\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.389688 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-logs" (OuterVolumeSpecName: "logs") pod 
"9b2014b4-1c15-4883-8435-1f26396ed008" (UID: "9b2014b4-1c15-4883-8435-1f26396ed008"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.389705 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"9b2014b4-1c15-4883-8435-1f26396ed008\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.389859 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-httpd-run\") pod \"9b2014b4-1c15-4883-8435-1f26396ed008\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.389894 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-public-tls-certs\") pod \"9b2014b4-1c15-4883-8435-1f26396ed008\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.389999 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-config-data\") pod \"9b2014b4-1c15-4883-8435-1f26396ed008\" (UID: \"9b2014b4-1c15-4883-8435-1f26396ed008\") " Feb 16 02:40:31.391321 master-0 kubenswrapper[31559]: I0216 02:40:31.390920 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:31.391662 master-0 kubenswrapper[31559]: I0216 02:40:31.391548 31559 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9b2014b4-1c15-4883-8435-1f26396ed008" (UID: "9b2014b4-1c15-4883-8435-1f26396ed008"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:31.393400 master-0 kubenswrapper[31559]: I0216 02:40:31.393363 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-scripts" (OuterVolumeSpecName: "scripts") pod "9b2014b4-1c15-4883-8435-1f26396ed008" (UID: "9b2014b4-1c15-4883-8435-1f26396ed008"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:31.394379 master-0 kubenswrapper[31559]: I0216 02:40:31.394335 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2014b4-1c15-4883-8435-1f26396ed008-kube-api-access-fpdlm" (OuterVolumeSpecName: "kube-api-access-fpdlm") pod "9b2014b4-1c15-4883-8435-1f26396ed008" (UID: "9b2014b4-1c15-4883-8435-1f26396ed008"). InnerVolumeSpecName "kube-api-access-fpdlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:31.407190 master-0 kubenswrapper[31559]: I0216 02:40:31.407144 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" event={"ID":"9b2014b4-1c15-4883-8435-1f26396ed008","Type":"ContainerDied","Data":"854e375b5788ff97303456a8cf5f2f324ada42f2b54e497516ce302f36b419b7"} Feb 16 02:40:31.407389 master-0 kubenswrapper[31559]: I0216 02:40:31.407205 31559 scope.go:117] "RemoveContainer" containerID="d727907c96d4b6dc6c4393991a32d768caf72eeec9fb1de049864242f34db444" Feb 16 02:40:31.407389 master-0 kubenswrapper[31559]: I0216 02:40:31.407202 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:31.411391 master-0 kubenswrapper[31559]: I0216 02:40:31.411354 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8" (OuterVolumeSpecName: "glance") pod "9b2014b4-1c15-4883-8435-1f26396ed008" (UID: "9b2014b4-1c15-4883-8435-1f26396ed008"). InnerVolumeSpecName "pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 02:40:31.423980 master-0 kubenswrapper[31559]: I0216 02:40:31.423938 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b2014b4-1c15-4883-8435-1f26396ed008" (UID: "9b2014b4-1c15-4883-8435-1f26396ed008"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:31.459783 master-0 kubenswrapper[31559]: I0216 02:40:31.459717 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b2014b4-1c15-4883-8435-1f26396ed008" (UID: "9b2014b4-1c15-4883-8435-1f26396ed008"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:31.473429 master-0 kubenswrapper[31559]: I0216 02:40:31.473383 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-config-data" (OuterVolumeSpecName: "config-data") pod "9b2014b4-1c15-4883-8435-1f26396ed008" (UID: "9b2014b4-1c15-4883-8435-1f26396ed008"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:31.495166 master-0 kubenswrapper[31559]: I0216 02:40:31.495112 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:31.495166 master-0 kubenswrapper[31559]: I0216 02:40:31.495153 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:31.495166 master-0 kubenswrapper[31559]: I0216 02:40:31.495166 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpdlm\" (UniqueName: \"kubernetes.io/projected/9b2014b4-1c15-4883-8435-1f26396ed008-kube-api-access-fpdlm\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:31.495453 master-0 kubenswrapper[31559]: I0216 02:40:31.495187 31559 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") on node \"master-0\" "
Feb 16 02:40:31.495453 master-0 kubenswrapper[31559]: I0216 02:40:31.495197 31559 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b2014b4-1c15-4883-8435-1f26396ed008-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:31.495453 master-0 kubenswrapper[31559]: I0216 02:40:31.495206 31559 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:31.495453 master-0 kubenswrapper[31559]: I0216 02:40:31.495215 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2014b4-1c15-4883-8435-1f26396ed008-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:31.536302 master-0 kubenswrapper[31559]: I0216 02:40:31.536007 31559 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 02:40:31.536302 master-0 kubenswrapper[31559]: I0216 02:40:31.536215 31559 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f" (UniqueName: "kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8") on node "master-0"
Feb 16 02:40:31.597154 master-0 kubenswrapper[31559]: I0216 02:40:31.597098 31559 reconciler_common.go:293] "Volume detached for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:31.950022 master-0 kubenswrapper[31559]: I0216 02:40:31.949460 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fbd84b845-7w476"]
Feb 16 02:40:31.950022 master-0 kubenswrapper[31559]: I0216 02:40:31.949695 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" podUID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerName="dnsmasq-dns" containerID="cri-o://686e2f92a7e9f283dd0f139230d85d8489dd8986eea4a01c147f865449eba7a6" gracePeriod=10
Feb 16 02:40:31.956133 master-0 kubenswrapper[31559]: I0216 02:40:31.955508 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-72940-default-external-api-0"]
Feb 16 02:40:32.006451 master-0 kubenswrapper[31559]: I0216 02:40:32.006288 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-72940-default-external-api-0"]
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.135541 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-72940-default-external-api-0"]
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: E0216 02:40:32.136003 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" containerName="neutron-api"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.136016 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" containerName="neutron-api"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: E0216 02:40:32.136074 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" containerName="glance-httpd"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.136081 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" containerName="glance-httpd"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: E0216 02:40:32.136101 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" containerName="glance-log"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.136108 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" containerName="glance-log"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: E0216 02:40:32.136141 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" containerName="neutron-httpd"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.136149 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" containerName="neutron-httpd"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.136420 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" containerName="glance-log"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.136456 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" containerName="neutron-httpd"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.136479 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" containerName="glance-httpd"
Feb 16 02:40:32.136550 master-0 kubenswrapper[31559]: I0216 02:40:32.136490 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9993cd-827d-4976-ac75-954bb9ace111" containerName="neutron-api"
Feb 16 02:40:32.140754 master-0 kubenswrapper[31559]: I0216 02:40:32.138054 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.142030 master-0 kubenswrapper[31559]: I0216 02:40:32.141991 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-72940-default-external-config-data"
Feb 16 02:40:32.142614 master-0 kubenswrapper[31559]: I0216 02:40:32.142580 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 16 02:40:32.210424 master-0 kubenswrapper[31559]: I0216 02:40:32.210304 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-72940-default-external-api-0"]
Feb 16 02:40:32.327751 master-0 kubenswrapper[31559]: I0216 02:40:32.327632 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.328237 master-0 kubenswrapper[31559]: I0216 02:40:32.327746 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.328237 master-0 kubenswrapper[31559]: I0216 02:40:32.327791 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-config-data\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.328237 master-0 kubenswrapper[31559]: I0216 02:40:32.327890 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.328237 master-0 kubenswrapper[31559]: I0216 02:40:32.327924 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbgq\" (UniqueName: \"kubernetes.io/projected/b5a33110-e7ec-47b7-a616-86c8d8ef5248-kube-api-access-6tbgq\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.328237 master-0 kubenswrapper[31559]: I0216 02:40:32.327978 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5a33110-e7ec-47b7-a616-86c8d8ef5248-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.328237 master-0 kubenswrapper[31559]: I0216 02:40:32.328039 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5a33110-e7ec-47b7-a616-86c8d8ef5248-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.328237 master-0 kubenswrapper[31559]: I0216 02:40:32.328084 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-combined-ca-bundle\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.424276 master-0 kubenswrapper[31559]: I0216 02:40:32.424210 31559 generic.go:334] "Generic (PLEG): container finished" podID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerID="686e2f92a7e9f283dd0f139230d85d8489dd8986eea4a01c147f865449eba7a6" exitCode=0
Feb 16 02:40:32.424507 master-0 kubenswrapper[31559]: I0216 02:40:32.424270 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" event={"ID":"52d1137e-c1db-4205-ae56-dfd8b4c84b39","Type":"ContainerDied","Data":"686e2f92a7e9f283dd0f139230d85d8489dd8986eea4a01c147f865449eba7a6"}
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.429580 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5a33110-e7ec-47b7-a616-86c8d8ef5248-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.429645 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5a33110-e7ec-47b7-a616-86c8d8ef5248-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.429680 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-combined-ca-bundle\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.429742 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.429782 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.429809 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-config-data\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.429863 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.429888 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbgq\" (UniqueName: \"kubernetes.io/projected/b5a33110-e7ec-47b7-a616-86c8d8ef5248-kube-api-access-6tbgq\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.430854 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5a33110-e7ec-47b7-a616-86c8d8ef5248-logs\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.435341 master-0 kubenswrapper[31559]: I0216 02:40:32.431076 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5a33110-e7ec-47b7-a616-86c8d8ef5248-httpd-run\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.438336 master-0 kubenswrapper[31559]: I0216 02:40:32.438301 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-combined-ca-bundle\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.439901 master-0 kubenswrapper[31559]: I0216 02:40:32.439870 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-public-tls-certs\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.440502 master-0 kubenswrapper[31559]: I0216 02:40:32.440425 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 02:40:32.440611 master-0 kubenswrapper[31559]: I0216 02:40:32.440593 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6df8a36abcf3de2186a10fb6ccf098d30950cdf1ed8f716dfec080b59e1d20e7/globalmount\"" pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.441388 master-0 kubenswrapper[31559]: I0216 02:40:32.441349 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-config-data\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.476489 master-0 kubenswrapper[31559]: I0216 02:40:32.462060 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a33110-e7ec-47b7-a616-86c8d8ef5248-scripts\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:32.476489 master-0 kubenswrapper[31559]: I0216 02:40:32.471452 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbgq\" (UniqueName: \"kubernetes.io/projected/b5a33110-e7ec-47b7-a616-86c8d8ef5248-kube-api-access-6tbgq\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:33.327604 master-0 kubenswrapper[31559]: I0216 02:40:33.327538 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f8967d3f-b95e-4cbf-ba61-3779cc05db2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fcb2bec2-7af4-4d84-a267-9adfc7ab4af8\") pod \"glance-72940-default-external-api-0\" (UID: \"b5a33110-e7ec-47b7-a616-86c8d8ef5248\") " pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:33.666264 master-0 kubenswrapper[31559]: I0216 02:40:33.666202 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-72940-default-external-api-0"
Feb 16 02:40:33.938203 master-0 kubenswrapper[31559]: I0216 02:40:33.938089 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2014b4-1c15-4883-8435-1f26396ed008" path="/var/lib/kubelet/pods/9b2014b4-1c15-4883-8435-1f26396ed008/volumes"
Feb 16 02:40:35.586312 master-0 kubenswrapper[31559]: I0216 02:40:35.586233 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-72940-default-internal-api-0"
Feb 16 02:40:35.733304 master-0 kubenswrapper[31559]: I0216 02:40:35.733107 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-logs\") pod \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") "
Feb 16 02:40:35.733304 master-0 kubenswrapper[31559]: I0216 02:40:35.733266 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl5pv\" (UniqueName: \"kubernetes.io/projected/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-kube-api-access-xl5pv\") pod \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") "
Feb 16 02:40:35.733555 master-0 kubenswrapper[31559]: I0216 02:40:35.733490 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-config-data\") pod \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") "
Feb 16 02:40:35.733555 master-0 kubenswrapper[31559]: I0216 02:40:35.733541 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-combined-ca-bundle\") pod \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") "
Feb 16 02:40:35.733624 master-0 kubenswrapper[31559]: I0216 02:40:35.733605 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-httpd-run\") pod \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") "
Feb 16 02:40:35.733839 master-0 kubenswrapper[31559]: I0216 02:40:35.733803 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") "
Feb 16 02:40:35.733985 master-0 kubenswrapper[31559]: I0216 02:40:35.733955 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-internal-tls-certs\") pod \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") "
Feb 16 02:40:35.734048 master-0 kubenswrapper[31559]: I0216 02:40:35.734024 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-scripts\") pod \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\" (UID: \"e0e776d3-0db2-474c-ad7f-6798e0b59a3c\") "
Feb 16 02:40:35.736675 master-0 kubenswrapper[31559]: I0216 02:40:35.736606 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-logs" (OuterVolumeSpecName: "logs") pod "e0e776d3-0db2-474c-ad7f-6798e0b59a3c" (UID: "e0e776d3-0db2-474c-ad7f-6798e0b59a3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:40:35.736797 master-0 kubenswrapper[31559]: I0216 02:40:35.736758 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e0e776d3-0db2-474c-ad7f-6798e0b59a3c" (UID: "e0e776d3-0db2-474c-ad7f-6798e0b59a3c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:40:35.740332 master-0 kubenswrapper[31559]: I0216 02:40:35.740237 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-scripts" (OuterVolumeSpecName: "scripts") pod "e0e776d3-0db2-474c-ad7f-6798e0b59a3c" (UID: "e0e776d3-0db2-474c-ad7f-6798e0b59a3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:35.740332 master-0 kubenswrapper[31559]: I0216 02:40:35.740257 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-kube-api-access-xl5pv" (OuterVolumeSpecName: "kube-api-access-xl5pv") pod "e0e776d3-0db2-474c-ad7f-6798e0b59a3c" (UID: "e0e776d3-0db2-474c-ad7f-6798e0b59a3c"). InnerVolumeSpecName "kube-api-access-xl5pv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:40:35.757868 master-0 kubenswrapper[31559]: I0216 02:40:35.757803 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3" (OuterVolumeSpecName: "glance") pod "e0e776d3-0db2-474c-ad7f-6798e0b59a3c" (UID: "e0e776d3-0db2-474c-ad7f-6798e0b59a3c"). InnerVolumeSpecName "pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 02:40:35.789623 master-0 kubenswrapper[31559]: I0216 02:40:35.789564 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0e776d3-0db2-474c-ad7f-6798e0b59a3c" (UID: "e0e776d3-0db2-474c-ad7f-6798e0b59a3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:35.813660 master-0 kubenswrapper[31559]: I0216 02:40:35.813588 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e0e776d3-0db2-474c-ad7f-6798e0b59a3c" (UID: "e0e776d3-0db2-474c-ad7f-6798e0b59a3c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:35.833711 master-0 kubenswrapper[31559]: I0216 02:40:35.833640 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-config-data" (OuterVolumeSpecName: "config-data") pod "e0e776d3-0db2-474c-ad7f-6798e0b59a3c" (UID: "e0e776d3-0db2-474c-ad7f-6798e0b59a3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:40:35.837216 master-0 kubenswrapper[31559]: I0216 02:40:35.837162 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:35.837216 master-0 kubenswrapper[31559]: I0216 02:40:35.837212 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl5pv\" (UniqueName: \"kubernetes.io/projected/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-kube-api-access-xl5pv\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:35.837321 master-0 kubenswrapper[31559]: I0216 02:40:35.837228 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:35.837321 master-0 kubenswrapper[31559]: I0216 02:40:35.837244 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:35.837321 master-0 kubenswrapper[31559]: I0216 02:40:35.837258 31559 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:35.837321 master-0 kubenswrapper[31559]: I0216 02:40:35.837303 31559 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") on node \"master-0\" "
Feb 16 02:40:35.837321 master-0 kubenswrapper[31559]: I0216 02:40:35.837319 31559 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:35.837492 master-0 kubenswrapper[31559]: I0216 02:40:35.837335 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e776d3-0db2-474c-ad7f-6798e0b59a3c-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:35.877890 master-0 kubenswrapper[31559]: I0216 02:40:35.876288 31559 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 02:40:35.877890 master-0 kubenswrapper[31559]: I0216 02:40:35.876561 31559 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f" (UniqueName: "kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3") on node "master-0"
Feb 16 02:40:35.938985 master-0 kubenswrapper[31559]: I0216 02:40:35.938931 31559 reconciler_common.go:293] "Volume detached for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:36.196048 master-0 kubenswrapper[31559]: I0216 02:40:36.195668 31559 scope.go:117] "RemoveContainer" containerID="84179352863d97c35954bf2153a31c72640c6c68fb23f5bda3ccdb7a1950ede3"
Feb 16 02:40:36.521102 master-0 kubenswrapper[31559]: I0216 02:40:36.521037 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"e0e776d3-0db2-474c-ad7f-6798e0b59a3c","Type":"ContainerDied","Data":"78623c7cf9ba16846090ca440c9884e1ada73bdbfc89227bd3523d117464dc19"}
Feb 16 02:40:36.521102 master-0 kubenswrapper[31559]: I0216 02:40:36.521097 31559 scope.go:117] "RemoveContainer" containerID="a156719f3471713b64f07c6e9ffea069af70f3c5cfb89722283b7cd9365bd26d"
Feb 16 02:40:36.521329 master-0 kubenswrapper[31559]: I0216 02:40:36.521186 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-72940-default-internal-api-0"
Feb 16 02:40:37.027980 master-0 kubenswrapper[31559]: I0216 02:40:37.027830 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-72940-default-internal-api-0"]
Feb 16 02:40:37.051693 master-0 kubenswrapper[31559]: I0216 02:40:37.051625 31559 scope.go:117] "RemoveContainer" containerID="852e77019e6f36e648b06570ae813160acdcdcb8bf5958343b9075f3cf0d9bf3"
Feb 16 02:40:37.151217 master-0 kubenswrapper[31559]: I0216 02:40:37.151149 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbd84b845-7w476"
Feb 16 02:40:37.196184 master-0 kubenswrapper[31559]: I0216 02:40:37.196109 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-72940-default-internal-api-0"]
Feb 16 02:40:37.293087 master-0 kubenswrapper[31559]: I0216 02:40:37.293014 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-config\") pod \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") "
Feb 16 02:40:37.293200 master-0 kubenswrapper[31559]: I0216 02:40:37.293145 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-svc\") pod \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") "
Feb 16 02:40:37.293253 master-0 kubenswrapper[31559]: I0216 02:40:37.293205 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-nb\") pod \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") "
Feb 16 02:40:37.293320 master-0 kubenswrapper[31559]: I0216 02:40:37.293261 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjn66\" (UniqueName: \"kubernetes.io/projected/52d1137e-c1db-4205-ae56-dfd8b4c84b39-kube-api-access-tjn66\") pod \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") "
Feb 16 02:40:37.293405 master-0 kubenswrapper[31559]: I0216 02:40:37.293372 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-swift-storage-0\") pod \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") "
Feb 16 02:40:37.293650 master-0 kubenswrapper[31559]: I0216 02:40:37.293606 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-sb\") pod \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\" (UID: \"52d1137e-c1db-4205-ae56-dfd8b4c84b39\") "
Feb 16 02:40:37.312596 master-0 kubenswrapper[31559]: I0216 02:40:37.308531 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d1137e-c1db-4205-ae56-dfd8b4c84b39-kube-api-access-tjn66" (OuterVolumeSpecName: "kube-api-access-tjn66") pod "52d1137e-c1db-4205-ae56-dfd8b4c84b39" (UID: "52d1137e-c1db-4205-ae56-dfd8b4c84b39"). InnerVolumeSpecName "kube-api-access-tjn66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:40:37.396859 master-0 kubenswrapper[31559]: I0216 02:40:37.396792 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjn66\" (UniqueName: \"kubernetes.io/projected/52d1137e-c1db-4205-ae56-dfd8b4c84b39-kube-api-access-tjn66\") on node \"master-0\" DevicePath \"\""
Feb 16 02:40:37.397068 master-0 kubenswrapper[31559]: I0216 02:40:37.396925 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52d1137e-c1db-4205-ae56-dfd8b4c84b39" (UID: "52d1137e-c1db-4205-ae56-dfd8b4c84b39"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:37.397068 master-0 kubenswrapper[31559]: I0216 02:40:37.396990 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52d1137e-c1db-4205-ae56-dfd8b4c84b39" (UID: "52d1137e-c1db-4205-ae56-dfd8b4c84b39"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:37.402100 master-0 kubenswrapper[31559]: I0216 02:40:37.402051 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-72940-default-internal-api-0"]
Feb 16 02:40:37.402690 master-0 kubenswrapper[31559]: E0216 02:40:37.402631 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerName="glance-log"
Feb 16 02:40:37.402690 master-0 kubenswrapper[31559]: I0216 02:40:37.402655 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerName="glance-log"
Feb 16 02:40:37.402690 master-0 kubenswrapper[31559]: E0216 02:40:37.402672 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerName="init"
Feb 16 02:40:37.402690 master-0 kubenswrapper[31559]: I0216 02:40:37.402681 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerName="init"
Feb 16 02:40:37.402904 master-0 kubenswrapper[31559]: I0216 02:40:37.402680 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-config" (OuterVolumeSpecName: "config") pod "52d1137e-c1db-4205-ae56-dfd8b4c84b39" (UID: "52d1137e-c1db-4205-ae56-dfd8b4c84b39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:37.402904 master-0 kubenswrapper[31559]: E0216 02:40:37.402742 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerName="dnsmasq-dns"
Feb 16 02:40:37.402904 master-0 kubenswrapper[31559]: I0216 02:40:37.402751 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerName="dnsmasq-dns"
Feb 16 02:40:37.402904 master-0 kubenswrapper[31559]: E0216 02:40:37.402774 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerName="glance-httpd"
Feb 16 02:40:37.402904 master-0 kubenswrapper[31559]: I0216 02:40:37.402784 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerName="glance-httpd"
Feb 16 02:40:37.404427 master-0 kubenswrapper[31559]: I0216 02:40:37.404363 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerName="dnsmasq-dns"
Feb 16 02:40:37.404514 master-0 kubenswrapper[31559]: I0216 02:40:37.404468 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerName="glance-httpd"
Feb 16 02:40:37.404514 master-0 kubenswrapper[31559]: I0216 02:40:37.404509 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" containerName="glance-log"
Feb 16 02:40:37.406107 master-0 kubenswrapper[31559]: I0216 02:40:37.406067 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-72940-default-internal-api-0"
Feb 16 02:40:37.409026 master-0 kubenswrapper[31559]: I0216 02:40:37.408942 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-72940-default-internal-config-data"
Feb 16 02:40:37.409210 master-0 kubenswrapper[31559]: I0216 02:40:37.409167 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 02:40:37.420293 master-0 kubenswrapper[31559]: I0216 02:40:37.420227 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52d1137e-c1db-4205-ae56-dfd8b4c84b39" (UID: "52d1137e-c1db-4205-ae56-dfd8b4c84b39"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:40:37.445867 master-0 kubenswrapper[31559]: I0216 02:40:37.445758 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52d1137e-c1db-4205-ae56-dfd8b4c84b39" (UID: "52d1137e-c1db-4205-ae56-dfd8b4c84b39"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:40:37.508488 master-0 kubenswrapper[31559]: I0216 02:40:37.505236 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:37.508488 master-0 kubenswrapper[31559]: I0216 02:40:37.505285 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:37.508488 master-0 kubenswrapper[31559]: I0216 02:40:37.505298 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:37.508488 master-0 kubenswrapper[31559]: I0216 02:40:37.505311 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:37.508488 master-0 kubenswrapper[31559]: I0216 02:40:37.505325 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52d1137e-c1db-4205-ae56-dfd8b4c84b39-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:37.539571 master-0 kubenswrapper[31559]: I0216 02:40:37.535960 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-72940-default-internal-api-0"] Feb 16 02:40:37.563691 master-0 kubenswrapper[31559]: I0216 02:40:37.562055 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" event={"ID":"52d1137e-c1db-4205-ae56-dfd8b4c84b39","Type":"ContainerDied","Data":"55e165cca09b6cdb7a0a9bab3cec68d6b44d40ca9c1a7070b1b11b9b62fc1f55"} Feb 16 
02:40:37.563691 master-0 kubenswrapper[31559]: I0216 02:40:37.562224 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" Feb 16 02:40:37.563691 master-0 kubenswrapper[31559]: I0216 02:40:37.563554 31559 scope.go:117] "RemoveContainer" containerID="686e2f92a7e9f283dd0f139230d85d8489dd8986eea4a01c147f865449eba7a6" Feb 16 02:40:37.584685 master-0 kubenswrapper[31559]: E0216 02:40:37.584620 31559 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1137e_c1db_4205_ae56_dfd8b4c84b39.slice/crio-55e165cca09b6cdb7a0a9bab3cec68d6b44d40ca9c1a7070b1b11b9b62fc1f55\": RecentStats: unable to find data in memory cache]" Feb 16 02:40:37.584814 master-0 kubenswrapper[31559]: E0216 02:40:37.584632 31559 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1137e_c1db_4205_ae56_dfd8b4c84b39.slice/crio-55e165cca09b6cdb7a0a9bab3cec68d6b44d40ca9c1a7070b1b11b9b62fc1f55\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d1137e_c1db_4205_ae56_dfd8b4c84b39.slice\": RecentStats: unable to find data in memory cache]" Feb 16 02:40:37.607261 master-0 kubenswrapper[31559]: I0216 02:40:37.607150 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.607261 master-0 kubenswrapper[31559]: I0216 02:40:37.607218 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-config-data\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.607261 master-0 kubenswrapper[31559]: I0216 02:40:37.607253 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69c12677-af7e-44f3-9419-1ef60f60b5f2-httpd-run\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.607649 master-0 kubenswrapper[31559]: I0216 02:40:37.607311 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxckq\" (UniqueName: \"kubernetes.io/projected/69c12677-af7e-44f3-9419-1ef60f60b5f2-kube-api-access-vxckq\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.607649 master-0 kubenswrapper[31559]: I0216 02:40:37.607356 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-combined-ca-bundle\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.607649 master-0 kubenswrapper[31559]: I0216 02:40:37.607402 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-internal-tls-certs\") pod \"glance-72940-default-internal-api-0\" (UID: 
\"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.607649 master-0 kubenswrapper[31559]: I0216 02:40:37.607469 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-scripts\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.607649 master-0 kubenswrapper[31559]: I0216 02:40:37.607491 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c12677-af7e-44f3-9419-1ef60f60b5f2-logs\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.709759 master-0 kubenswrapper[31559]: I0216 02:40:37.709709 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-scripts\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.709759 master-0 kubenswrapper[31559]: I0216 02:40:37.709760 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c12677-af7e-44f3-9419-1ef60f60b5f2-logs\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.710382 master-0 kubenswrapper[31559]: I0216 02:40:37.710362 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c12677-af7e-44f3-9419-1ef60f60b5f2-logs\") pod 
\"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.710475 master-0 kubenswrapper[31559]: I0216 02:40:37.710416 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-config-data\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.710475 master-0 kubenswrapper[31559]: I0216 02:40:37.710466 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69c12677-af7e-44f3-9419-1ef60f60b5f2-httpd-run\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.710583 master-0 kubenswrapper[31559]: I0216 02:40:37.710518 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxckq\" (UniqueName: \"kubernetes.io/projected/69c12677-af7e-44f3-9419-1ef60f60b5f2-kube-api-access-vxckq\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.710583 master-0 kubenswrapper[31559]: I0216 02:40:37.710559 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-combined-ca-bundle\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.710686 master-0 kubenswrapper[31559]: I0216 02:40:37.710601 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-internal-tls-certs\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.712100 master-0 kubenswrapper[31559]: I0216 02:40:37.712036 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69c12677-af7e-44f3-9419-1ef60f60b5f2-httpd-run\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.716149 master-0 kubenswrapper[31559]: I0216 02:40:37.715563 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-scripts\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.716280 master-0 kubenswrapper[31559]: I0216 02:40:37.716214 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-combined-ca-bundle\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.718592 master-0 kubenswrapper[31559]: I0216 02:40:37.717828 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-internal-tls-certs\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.719320 master-0 kubenswrapper[31559]: I0216 02:40:37.719278 31559 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c12677-af7e-44f3-9419-1ef60f60b5f2-config-data\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.792284 master-0 kubenswrapper[31559]: I0216 02:40:37.792213 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxckq\" (UniqueName: \"kubernetes.io/projected/69c12677-af7e-44f3-9419-1ef60f60b5f2-kube-api-access-vxckq\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.813601 master-0 kubenswrapper[31559]: I0216 02:40:37.813512 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.816619 master-0 kubenswrapper[31559]: I0216 02:40:37.815794 31559 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 02:40:37.816619 master-0 kubenswrapper[31559]: I0216 02:40:37.815854 31559 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0789e8af4b098ede363a908d5ca2de42a744f798f7b80a69915d2057c015778f/globalmount\"" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:37.945264 master-0 kubenswrapper[31559]: I0216 02:40:37.945137 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e776d3-0db2-474c-ad7f-6798e0b59a3c" path="/var/lib/kubelet/pods/e0e776d3-0db2-474c-ad7f-6798e0b59a3c/volumes" Feb 16 02:40:37.973954 master-0 kubenswrapper[31559]: I0216 02:40:37.972417 31559 scope.go:117] "RemoveContainer" containerID="714454ce59af0f2a0a90940ec7784ddc3a76240fac92fab0f359069d0ff43eb6" Feb 16 02:40:38.004491 master-0 kubenswrapper[31559]: I0216 02:40:38.004398 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fbd84b845-7w476"] Feb 16 02:40:38.264202 master-0 kubenswrapper[31559]: W0216 02:40:38.264124 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a33110_e7ec_47b7_a616_86c8d8ef5248.slice/crio-c1a6df2a706e0da7f87d3a996fe5c1454b08f2f964f843dbcfbbfff014c135c0 WatchSource:0}: Error finding container c1a6df2a706e0da7f87d3a996fe5c1454b08f2f964f843dbcfbbfff014c135c0: Status 404 returned error can't find the container with id c1a6df2a706e0da7f87d3a996fe5c1454b08f2f964f843dbcfbbfff014c135c0 Feb 16 02:40:38.301915 master-0 kubenswrapper[31559]: I0216 02:40:38.301852 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-72940-default-external-api-0"] Feb 16 02:40:38.584603 master-0 
kubenswrapper[31559]: I0216 02:40:38.584490 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerStarted","Data":"c241312784a45414d22c770f53e8c8f795a20e8cfba2730d19ff1ecdaee6cfba"} Feb 16 02:40:38.589320 master-0 kubenswrapper[31559]: I0216 02:40:38.589233 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" event={"ID":"b5a33110-e7ec-47b7-a616-86c8d8ef5248","Type":"ContainerStarted","Data":"c1a6df2a706e0da7f87d3a996fe5c1454b08f2f964f843dbcfbbfff014c135c0"} Feb 16 02:40:38.769358 master-0 kubenswrapper[31559]: I0216 02:40:38.769287 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a28269aa-59fc-4653-a10d-dcadc1f5499f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e8938e5-d57e-476b-aba2-0b5d7983a3e3\") pod \"glance-72940-default-internal-api-0\" (UID: \"69c12677-af7e-44f3-9419-1ef60f60b5f2\") " pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:39.115721 master-0 kubenswrapper[31559]: I0216 02:40:39.115653 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fbd84b845-7w476"] Feb 16 02:40:39.531886 master-0 kubenswrapper[31559]: I0216 02:40:39.531822 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:39.608533 master-0 kubenswrapper[31559]: I0216 02:40:39.608417 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" event={"ID":"b5a33110-e7ec-47b7-a616-86c8d8ef5248","Type":"ContainerStarted","Data":"1456f563d8fa6deeda79aaf3675ba55023311fdcd42c3a9d2e7e34c482e35dcb"} Feb 16 02:40:39.610551 master-0 kubenswrapper[31559]: I0216 02:40:39.610426 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" event={"ID":"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e","Type":"ContainerStarted","Data":"4979bad83bc9ff44e251c1dda305d92b6042d0ffacc394ba2959f41d490ef835"} Feb 16 02:40:39.637492 master-0 kubenswrapper[31559]: I0216 02:40:39.637397 31559 generic.go:334] "Generic (PLEG): container finished" podID="12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" containerID="24852c3713cea8664e30150026387fa056e241eeb7449b7c705657bbb582d5eb" exitCode=0 Feb 16 02:40:39.637492 master-0 kubenswrapper[31559]: I0216 02:40:39.637497 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0","Type":"ContainerDied","Data":"24852c3713cea8664e30150026387fa056e241eeb7449b7c705657bbb582d5eb"} Feb 16 02:40:39.931464 master-0 kubenswrapper[31559]: I0216 02:40:39.918836 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" podStartSLOduration=3.438159234 podStartE2EDuration="17.918820491s" podCreationTimestamp="2026-02-16 02:40:22 +0000 UTC" firstStartedPulling="2026-02-16 02:40:23.737141556 +0000 UTC m=+1076.081747571" lastFinishedPulling="2026-02-16 02:40:38.217802813 +0000 UTC m=+1090.562408828" observedRunningTime="2026-02-16 02:40:39.917481347 +0000 UTC m=+1092.262087372" watchObservedRunningTime="2026-02-16 02:40:39.918820491 +0000 UTC m=+1092.263426506" Feb 16 
02:40:39.945318 master-0 kubenswrapper[31559]: I0216 02:40:39.943945 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" path="/var/lib/kubelet/pods/52d1137e-c1db-4205-ae56-dfd8b4c84b39/volumes" Feb 16 02:40:40.384002 master-0 kubenswrapper[31559]: I0216 02:40:40.383941 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 02:40:40.502458 master-0 kubenswrapper[31559]: I0216 02:40:40.498865 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " Feb 16 02:40:40.502458 master-0 kubenswrapper[31559]: I0216 02:40:40.498969 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbwn8\" (UniqueName: \"kubernetes.io/projected/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-kube-api-access-sbwn8\") pod \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " Feb 16 02:40:40.502458 master-0 kubenswrapper[31559]: I0216 02:40:40.499180 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-etc-podinfo\") pod \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " Feb 16 02:40:40.502458 master-0 kubenswrapper[31559]: I0216 02:40:40.499271 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-scripts\") pod \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " Feb 16 02:40:40.502458 master-0 
kubenswrapper[31559]: I0216 02:40:40.499318 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-config\") pod \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " Feb 16 02:40:40.502458 master-0 kubenswrapper[31559]: I0216 02:40:40.499407 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic\") pod \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " Feb 16 02:40:40.502458 master-0 kubenswrapper[31559]: I0216 02:40:40.499530 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-combined-ca-bundle\") pod \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\" (UID: \"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0\") " Feb 16 02:40:40.506544 master-0 kubenswrapper[31559]: I0216 02:40:40.502806 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" (UID: "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:40.506544 master-0 kubenswrapper[31559]: I0216 02:40:40.502908 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" (UID: "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:40:40.506544 master-0 kubenswrapper[31559]: I0216 02:40:40.505572 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-config" (OuterVolumeSpecName: "config") pod "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" (UID: "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:40.506544 master-0 kubenswrapper[31559]: I0216 02:40:40.505730 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" (UID: "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 02:40:40.515455 master-0 kubenswrapper[31559]: I0216 02:40:40.510781 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-kube-api-access-sbwn8" (OuterVolumeSpecName: "kube-api-access-sbwn8") pod "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" (UID: "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0"). InnerVolumeSpecName "kube-api-access-sbwn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:40:40.515455 master-0 kubenswrapper[31559]: I0216 02:40:40.512489 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-72940-default-internal-api-0"] Feb 16 02:40:40.523822 master-0 kubenswrapper[31559]: I0216 02:40:40.522738 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-scripts" (OuterVolumeSpecName: "scripts") pod "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" (UID: "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:40.584766 master-0 kubenswrapper[31559]: I0216 02:40:40.584659 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" (UID: "12b7b092-d70d-4cdf-85d6-9d98d6b97ca0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:40:40.602287 master-0 kubenswrapper[31559]: I0216 02:40:40.602203 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbwn8\" (UniqueName: \"kubernetes.io/projected/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-kube-api-access-sbwn8\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:40.602287 master-0 kubenswrapper[31559]: I0216 02:40:40.602242 31559 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:40.602287 master-0 kubenswrapper[31559]: I0216 02:40:40.602252 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:40.602287 master-0 kubenswrapper[31559]: I0216 02:40:40.602261 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:40.602287 master-0 kubenswrapper[31559]: I0216 02:40:40.602277 31559 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:40.602287 master-0 kubenswrapper[31559]: I0216 
02:40:40.602285 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:40.602287 master-0 kubenswrapper[31559]: I0216 02:40:40.602296 31559 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 16 02:40:40.672210 master-0 kubenswrapper[31559]: I0216 02:40:40.671054 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"12b7b092-d70d-4cdf-85d6-9d98d6b97ca0","Type":"ContainerDied","Data":"27f3b1c8f975b87ee6b11d38d84a11ee466a6e12cbc23f7eb69dff130e40a8c8"} Feb 16 02:40:40.672210 master-0 kubenswrapper[31559]: I0216 02:40:40.671309 31559 scope.go:117] "RemoveContainer" containerID="24852c3713cea8664e30150026387fa056e241eeb7449b7c705657bbb582d5eb" Feb 16 02:40:40.672210 master-0 kubenswrapper[31559]: I0216 02:40:40.671462 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 02:40:40.676551 master-0 kubenswrapper[31559]: I0216 02:40:40.676417 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"69c12677-af7e-44f3-9419-1ef60f60b5f2","Type":"ContainerStarted","Data":"cec750576725f4fd29573bec2c2e8357b9af126050a62d3ff94ddbe913a9cb37"} Feb 16 02:40:40.679564 master-0 kubenswrapper[31559]: I0216 02:40:40.678659 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-external-api-0" event={"ID":"b5a33110-e7ec-47b7-a616-86c8d8ef5248","Type":"ContainerStarted","Data":"eb16e9a6334b62f50930ff5cd7b5ceed5c9f66cc83c314ea3feac7abd6e63a85"} Feb 16 02:40:40.751498 master-0 kubenswrapper[31559]: I0216 02:40:40.748595 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-72940-default-external-api-0" podStartSLOduration=8.748579921 podStartE2EDuration="8.748579921s" podCreationTimestamp="2026-02-16 02:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:40:40.748478438 +0000 UTC m=+1093.093084453" watchObservedRunningTime="2026-02-16 02:40:40.748579921 +0000 UTC m=+1093.093185936" Feb 16 02:40:40.889899 master-0 kubenswrapper[31559]: I0216 02:40:40.889699 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:40.911957 master-0 kubenswrapper[31559]: I0216 02:40:40.911617 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:40.985741 master-0 kubenswrapper[31559]: I0216 02:40:40.985656 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:40.986375 master-0 kubenswrapper[31559]: E0216 02:40:40.986339 31559 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" containerName="ironic-python-agent-init" Feb 16 02:40:40.986375 master-0 kubenswrapper[31559]: I0216 02:40:40.986372 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" containerName="ironic-python-agent-init" Feb 16 02:40:40.986977 master-0 kubenswrapper[31559]: I0216 02:40:40.986953 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" containerName="ironic-python-agent-init" Feb 16 02:40:40.992363 master-0 kubenswrapper[31559]: I0216 02:40:40.992294 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 02:40:41.006724 master-0 kubenswrapper[31559]: I0216 02:40:41.003939 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 16 02:40:41.006724 master-0 kubenswrapper[31559]: I0216 02:40:41.004285 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 16 02:40:41.006724 master-0 kubenswrapper[31559]: I0216 02:40:41.004425 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 02:40:41.006724 master-0 kubenswrapper[31559]: I0216 02:40:41.004516 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 16 02:40:41.006724 master-0 kubenswrapper[31559]: I0216 02:40:41.006328 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 02:40:41.020736 master-0 kubenswrapper[31559]: I0216 02:40:41.020667 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:41.131808 master-0 kubenswrapper[31559]: I0216 02:40:41.131736 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.131981 master-0 kubenswrapper[31559]: I0216 02:40:41.131906 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.132207 master-0 kubenswrapper[31559]: I0216 02:40:41.132181 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c619a336-be30-4974-842a-de99c2ec7a0c-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.132347 master-0 kubenswrapper[31559]: I0216 02:40:41.132320 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c619a336-be30-4974-842a-de99c2ec7a0c-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.132446 master-0 kubenswrapper[31559]: I0216 02:40:41.132384 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.132446 master-0 
kubenswrapper[31559]: I0216 02:40:41.132420 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c619a336-be30-4974-842a-de99c2ec7a0c-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.132549 master-0 kubenswrapper[31559]: I0216 02:40:41.132523 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjg92\" (UniqueName: \"kubernetes.io/projected/c619a336-be30-4974-842a-de99c2ec7a0c-kube-api-access-mjg92\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.132721 master-0 kubenswrapper[31559]: I0216 02:40:41.132693 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-scripts\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.132799 master-0 kubenswrapper[31559]: I0216 02:40:41.132746 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-config\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: I0216 02:40:41.235978 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: 
I0216 02:40:41.236116 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c619a336-be30-4974-842a-de99c2ec7a0c-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: I0216 02:40:41.236186 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c619a336-be30-4974-842a-de99c2ec7a0c-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: I0216 02:40:41.236234 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: I0216 02:40:41.236263 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c619a336-be30-4974-842a-de99c2ec7a0c-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: I0216 02:40:41.236290 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjg92\" (UniqueName: \"kubernetes.io/projected/c619a336-be30-4974-842a-de99c2ec7a0c-kube-api-access-mjg92\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: I0216 
02:40:41.236349 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-scripts\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: I0216 02:40:41.236391 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-config\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.237463 master-0 kubenswrapper[31559]: I0216 02:40:41.236556 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.240882 master-0 kubenswrapper[31559]: I0216 02:40:41.239895 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.240882 master-0 kubenswrapper[31559]: I0216 02:40:41.240294 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c619a336-be30-4974-842a-de99c2ec7a0c-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.240882 master-0 kubenswrapper[31559]: I0216 02:40:41.240685 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.241150 master-0 kubenswrapper[31559]: I0216 02:40:41.241075 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c619a336-be30-4974-842a-de99c2ec7a0c-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.241194 master-0 kubenswrapper[31559]: I0216 02:40:41.241177 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c619a336-be30-4974-842a-de99c2ec7a0c-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.243453 master-0 kubenswrapper[31559]: I0216 02:40:41.242493 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.251078 master-0 kubenswrapper[31559]: I0216 02:40:41.245166 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-config\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.251078 master-0 kubenswrapper[31559]: I0216 02:40:41.247039 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c619a336-be30-4974-842a-de99c2ec7a0c-scripts\") pod 
\"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.269499 master-0 kubenswrapper[31559]: I0216 02:40:41.268936 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjg92\" (UniqueName: \"kubernetes.io/projected/c619a336-be30-4974-842a-de99c2ec7a0c-kube-api-access-mjg92\") pod \"ironic-inspector-0\" (UID: \"c619a336-be30-4974-842a-de99c2ec7a0c\") " pod="openstack/ironic-inspector-0" Feb 16 02:40:41.345300 master-0 kubenswrapper[31559]: I0216 02:40:41.345235 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 02:40:41.722428 master-0 kubenswrapper[31559]: I0216 02:40:41.722303 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"69c12677-af7e-44f3-9419-1ef60f60b5f2","Type":"ContainerStarted","Data":"d784d464e033f8de79d9f8d45f57dc05abefaff7ac10c2db1e036b84a0b5ba52"} Feb 16 02:40:41.819647 master-0 kubenswrapper[31559]: I0216 02:40:41.819390 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6fbd84b845-7w476" podUID="52d1137e-c1db-4205-ae56-dfd8b4c84b39" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.234:5353: i/o timeout" Feb 16 02:40:41.943588 master-0 kubenswrapper[31559]: I0216 02:40:41.943538 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b7b092-d70d-4cdf-85d6-9d98d6b97ca0" path="/var/lib/kubelet/pods/12b7b092-d70d-4cdf-85d6-9d98d6b97ca0/volumes" Feb 16 02:40:42.015845 master-0 kubenswrapper[31559]: I0216 02:40:42.015777 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 02:40:42.019066 master-0 kubenswrapper[31559]: W0216 02:40:42.018989 31559 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc619a336_be30_4974_842a_de99c2ec7a0c.slice/crio-ce1cb7e0c0a6360bf7d0aa70ce75ccfb8b38a67a58823c97dbd6ee9ce934ad8f WatchSource:0}: Error finding container ce1cb7e0c0a6360bf7d0aa70ce75ccfb8b38a67a58823c97dbd6ee9ce934ad8f: Status 404 returned error can't find the container with id ce1cb7e0c0a6360bf7d0aa70ce75ccfb8b38a67a58823c97dbd6ee9ce934ad8f Feb 16 02:40:42.749402 master-0 kubenswrapper[31559]: I0216 02:40:42.749309 31559 generic.go:334] "Generic (PLEG): container finished" podID="c619a336-be30-4974-842a-de99c2ec7a0c" containerID="6685e92703a5c930ab947b4b5d58de355575d9a9bcb8d0b4be0c58ec9cea0c2a" exitCode=0 Feb 16 02:40:42.750101 master-0 kubenswrapper[31559]: I0216 02:40:42.749546 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"c619a336-be30-4974-842a-de99c2ec7a0c","Type":"ContainerDied","Data":"6685e92703a5c930ab947b4b5d58de355575d9a9bcb8d0b4be0c58ec9cea0c2a"} Feb 16 02:40:42.750184 master-0 kubenswrapper[31559]: I0216 02:40:42.750153 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"c619a336-be30-4974-842a-de99c2ec7a0c","Type":"ContainerStarted","Data":"ce1cb7e0c0a6360bf7d0aa70ce75ccfb8b38a67a58823c97dbd6ee9ce934ad8f"} Feb 16 02:40:42.757315 master-0 kubenswrapper[31559]: I0216 02:40:42.757256 31559 generic.go:334] "Generic (PLEG): container finished" podID="f71d864b-c882-43b8-a7a7-b1e163d38aa4" containerID="c241312784a45414d22c770f53e8c8f795a20e8cfba2730d19ff1ecdaee6cfba" exitCode=0 Feb 16 02:40:42.757412 master-0 kubenswrapper[31559]: I0216 02:40:42.757350 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerDied","Data":"c241312784a45414d22c770f53e8c8f795a20e8cfba2730d19ff1ecdaee6cfba"} Feb 16 02:40:42.762668 master-0 kubenswrapper[31559]: I0216 02:40:42.762591 31559 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-72940-default-internal-api-0" event={"ID":"69c12677-af7e-44f3-9419-1ef60f60b5f2","Type":"ContainerStarted","Data":"4503021a2502800e6fd7e0f45068c37515ca766aa9584642cc1fdb863506368d"} Feb 16 02:40:42.835187 master-0 kubenswrapper[31559]: I0216 02:40:42.834931 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-72940-default-internal-api-0" podStartSLOduration=5.834910309 podStartE2EDuration="5.834910309s" podCreationTimestamp="2026-02-16 02:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:40:42.82046718 +0000 UTC m=+1095.165073205" watchObservedRunningTime="2026-02-16 02:40:42.834910309 +0000 UTC m=+1095.179516334" Feb 16 02:40:43.666625 master-0 kubenswrapper[31559]: I0216 02:40:43.666533 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:43.666625 master-0 kubenswrapper[31559]: I0216 02:40:43.666622 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:43.706890 master-0 kubenswrapper[31559]: I0216 02:40:43.706824 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:43.721348 master-0 kubenswrapper[31559]: I0216 02:40:43.721295 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:43.781460 master-0 kubenswrapper[31559]: I0216 02:40:43.780939 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:43.781460 master-0 kubenswrapper[31559]: I0216 02:40:43.781022 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:46.690808 master-0 kubenswrapper[31559]: I0216 02:40:46.690734 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:48.366201 master-0 kubenswrapper[31559]: I0216 02:40:48.366156 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-72940-default-external-api-0" Feb 16 02:40:49.534525 master-0 kubenswrapper[31559]: I0216 02:40:49.534422 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:49.534525 master-0 kubenswrapper[31559]: I0216 02:40:49.534517 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:49.579924 master-0 kubenswrapper[31559]: I0216 02:40:49.579852 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:49.601860 master-0 kubenswrapper[31559]: I0216 02:40:49.601792 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:49.861493 master-0 kubenswrapper[31559]: I0216 02:40:49.861260 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerStarted","Data":"8091045113d7c4d70e9e3e1588a057088d5e4a4b106abd7fff2b97bfd191a69f"} Feb 16 02:40:49.864591 master-0 kubenswrapper[31559]: I0216 02:40:49.864550 31559 generic.go:334] "Generic (PLEG): container finished" podID="c619a336-be30-4974-842a-de99c2ec7a0c" containerID="c64ee8912f11ffc7a38ba60259b78e3a46d288fc0eccb80de145afcc57b9987c" exitCode=0 Feb 16 02:40:49.864696 master-0 kubenswrapper[31559]: I0216 02:40:49.864622 31559 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ironic-inspector-0" event={"ID":"c619a336-be30-4974-842a-de99c2ec7a0c","Type":"ContainerDied","Data":"c64ee8912f11ffc7a38ba60259b78e3a46d288fc0eccb80de145afcc57b9987c"} Feb 16 02:40:49.864924 master-0 kubenswrapper[31559]: I0216 02:40:49.864878 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:49.864974 master-0 kubenswrapper[31559]: I0216 02:40:49.864948 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:50.886585 master-0 kubenswrapper[31559]: I0216 02:40:50.884605 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"c619a336-be30-4974-842a-de99c2ec7a0c","Type":"ContainerStarted","Data":"7cfa2214b12e42c7e3e4b5c20bb7ae94230ce89914254b423969aca73385ec52"} Feb 16 02:40:51.620054 master-0 kubenswrapper[31559]: I0216 02:40:51.619992 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:51.731555 master-0 kubenswrapper[31559]: I0216 02:40:51.731036 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-72940-default-internal-api-0" Feb 16 02:40:51.905783 master-0 kubenswrapper[31559]: I0216 02:40:51.905509 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"c619a336-be30-4974-842a-de99c2ec7a0c","Type":"ContainerStarted","Data":"5036d9f0c1cadd414232c6a59c3e6ab07860a88c54c0d565304896bf157aa285"} Feb 16 02:40:52.932202 master-0 kubenswrapper[31559]: I0216 02:40:52.929800 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"c619a336-be30-4974-842a-de99c2ec7a0c","Type":"ContainerStarted","Data":"ae87fc7460570fbe923bd4f62fd45f5873606c3b0a98fe66c4e2a3c27a160554"} Feb 16 02:40:52.932202 master-0 
kubenswrapper[31559]: I0216 02:40:52.929850 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"c619a336-be30-4974-842a-de99c2ec7a0c","Type":"ContainerStarted","Data":"f6892ff8e304a59301f6f2c9af3a67d509ba4dc698cb1d17508cd5e8c8d2f8cd"} Feb 16 02:40:53.949073 master-0 kubenswrapper[31559]: I0216 02:40:53.948951 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"c619a336-be30-4974-842a-de99c2ec7a0c","Type":"ContainerStarted","Data":"2fa9f29424aeff029fce9bfe6ebb5a025954861fd1ebc34251d3e224b91a2114"} Feb 16 02:40:53.949073 master-0 kubenswrapper[31559]: I0216 02:40:53.949076 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 16 02:40:53.950072 master-0 kubenswrapper[31559]: I0216 02:40:53.949108 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 16 02:40:53.992180 master-0 kubenswrapper[31559]: I0216 02:40:53.992078 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=8.401836782 podStartE2EDuration="13.992054919s" podCreationTimestamp="2026-02-16 02:40:40 +0000 UTC" firstStartedPulling="2026-02-16 02:40:42.751690895 +0000 UTC m=+1095.096296910" lastFinishedPulling="2026-02-16 02:40:48.341909032 +0000 UTC m=+1100.686515047" observedRunningTime="2026-02-16 02:40:53.98230791 +0000 UTC m=+1106.326913935" watchObservedRunningTime="2026-02-16 02:40:53.992054919 +0000 UTC m=+1106.336660944" Feb 16 02:40:56.346487 master-0 kubenswrapper[31559]: I0216 02:40:56.346392 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 16 02:40:56.346487 master-0 kubenswrapper[31559]: I0216 02:40:56.346498 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 16 02:40:56.350662 master-0 
kubenswrapper[31559]: I0216 02:40:56.350623 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 16 02:40:56.413422 master-0 kubenswrapper[31559]: I0216 02:40:56.413368 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 16 02:41:01.099651 master-0 kubenswrapper[31559]: I0216 02:41:01.099571 31559 generic.go:334] "Generic (PLEG): container finished" podID="8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" containerID="4979bad83bc9ff44e251c1dda305d92b6042d0ffacc394ba2959f41d490ef835" exitCode=0 Feb 16 02:41:01.099651 master-0 kubenswrapper[31559]: I0216 02:41:01.099636 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" event={"ID":"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e","Type":"ContainerDied","Data":"4979bad83bc9ff44e251c1dda305d92b6042d0ffacc394ba2959f41d490ef835"} Feb 16 02:41:01.345873 master-0 kubenswrapper[31559]: I0216 02:41:01.345790 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 16 02:41:01.347077 master-0 kubenswrapper[31559]: I0216 02:41:01.347022 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 16 02:41:01.381155 master-0 kubenswrapper[31559]: I0216 02:41:01.381035 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 16 02:41:01.385171 master-0 kubenswrapper[31559]: I0216 02:41:01.385152 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 16 02:41:02.130167 master-0 kubenswrapper[31559]: I0216 02:41:02.130104 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 16 02:41:02.131792 master-0 kubenswrapper[31559]: I0216 02:41:02.131755 31559 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 16 02:41:02.771579 master-0 kubenswrapper[31559]: I0216 02:41:02.769155 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:41:02.870896 master-0 kubenswrapper[31559]: I0216 02:41:02.870637 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-config-data\") pod \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " Feb 16 02:41:02.870896 master-0 kubenswrapper[31559]: I0216 02:41:02.870721 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-scripts\") pod \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " Feb 16 02:41:02.870896 master-0 kubenswrapper[31559]: I0216 02:41:02.870806 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2dkl\" (UniqueName: \"kubernetes.io/projected/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-kube-api-access-n2dkl\") pod \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " Feb 16 02:41:02.870896 master-0 kubenswrapper[31559]: I0216 02:41:02.870867 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-combined-ca-bundle\") pod \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\" (UID: \"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e\") " Feb 16 02:41:02.875049 master-0 kubenswrapper[31559]: I0216 02:41:02.874793 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-scripts" (OuterVolumeSpecName: "scripts") 
pod "8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" (UID: "8b2abcfb-1594-4f85-8068-80c1a8d7fc3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:02.878100 master-0 kubenswrapper[31559]: I0216 02:41:02.876837 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-kube-api-access-n2dkl" (OuterVolumeSpecName: "kube-api-access-n2dkl") pod "8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" (UID: "8b2abcfb-1594-4f85-8068-80c1a8d7fc3e"). InnerVolumeSpecName "kube-api-access-n2dkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:41:02.914496 master-0 kubenswrapper[31559]: I0216 02:41:02.914328 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-config-data" (OuterVolumeSpecName: "config-data") pod "8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" (UID: "8b2abcfb-1594-4f85-8068-80c1a8d7fc3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:02.924397 master-0 kubenswrapper[31559]: I0216 02:41:02.922331 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" (UID: "8b2abcfb-1594-4f85-8068-80c1a8d7fc3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:02.975369 master-0 kubenswrapper[31559]: I0216 02:41:02.975275 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2dkl\" (UniqueName: \"kubernetes.io/projected/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-kube-api-access-n2dkl\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:02.975369 master-0 kubenswrapper[31559]: I0216 02:41:02.975341 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:02.975369 master-0 kubenswrapper[31559]: I0216 02:41:02.975365 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:02.975856 master-0 kubenswrapper[31559]: I0216 02:41:02.975393 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:03.135593 master-0 kubenswrapper[31559]: I0216 02:41:03.135461 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" Feb 16 02:41:03.135593 master-0 kubenswrapper[31559]: I0216 02:41:03.135553 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4dzgp" event={"ID":"8b2abcfb-1594-4f85-8068-80c1a8d7fc3e","Type":"ContainerDied","Data":"c7c98ce38649b3662c2623c77743639f913dd96e782703a26b2d68336f65cb87"} Feb 16 02:41:03.136307 master-0 kubenswrapper[31559]: I0216 02:41:03.135615 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7c98ce38649b3662c2623c77743639f913dd96e782703a26b2d68336f65cb87" Feb 16 02:41:03.290619 master-0 kubenswrapper[31559]: I0216 02:41:03.290536 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 02:41:03.291668 master-0 kubenswrapper[31559]: E0216 02:41:03.291614 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" containerName="nova-cell0-conductor-db-sync" Feb 16 02:41:03.291668 master-0 kubenswrapper[31559]: I0216 02:41:03.291663 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" containerName="nova-cell0-conductor-db-sync" Feb 16 02:41:03.292268 master-0 kubenswrapper[31559]: I0216 02:41:03.292219 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" containerName="nova-cell0-conductor-db-sync" Feb 16 02:41:03.294036 master-0 kubenswrapper[31559]: I0216 02:41:03.293981 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.296168 master-0 kubenswrapper[31559]: I0216 02:41:03.296115 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 02:41:03.302916 master-0 kubenswrapper[31559]: I0216 02:41:03.302856 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 02:41:03.387627 master-0 kubenswrapper[31559]: I0216 02:41:03.387501 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452f445f-0017-48f6-b304-12fbde1df087-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.387627 master-0 kubenswrapper[31559]: I0216 02:41:03.387588 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8mc\" (UniqueName: \"kubernetes.io/projected/452f445f-0017-48f6-b304-12fbde1df087-kube-api-access-4t8mc\") pod \"nova-cell0-conductor-0\" (UID: \"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.387870 master-0 kubenswrapper[31559]: I0216 02:41:03.387654 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452f445f-0017-48f6-b304-12fbde1df087-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.489990 master-0 kubenswrapper[31559]: I0216 02:41:03.489920 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452f445f-0017-48f6-b304-12fbde1df087-config-data\") pod \"nova-cell0-conductor-0\" (UID: 
\"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.490207 master-0 kubenswrapper[31559]: I0216 02:41:03.490005 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t8mc\" (UniqueName: \"kubernetes.io/projected/452f445f-0017-48f6-b304-12fbde1df087-kube-api-access-4t8mc\") pod \"nova-cell0-conductor-0\" (UID: \"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.490391 master-0 kubenswrapper[31559]: I0216 02:41:03.490335 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452f445f-0017-48f6-b304-12fbde1df087-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.493917 master-0 kubenswrapper[31559]: I0216 02:41:03.493667 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452f445f-0017-48f6-b304-12fbde1df087-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.494000 master-0 kubenswrapper[31559]: I0216 02:41:03.493922 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452f445f-0017-48f6-b304-12fbde1df087-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.510428 master-0 kubenswrapper[31559]: I0216 02:41:03.510352 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t8mc\" (UniqueName: \"kubernetes.io/projected/452f445f-0017-48f6-b304-12fbde1df087-kube-api-access-4t8mc\") pod \"nova-cell0-conductor-0\" (UID: 
\"452f445f-0017-48f6-b304-12fbde1df087\") " pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:03.618549 master-0 kubenswrapper[31559]: I0216 02:41:03.618162 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:04.135864 master-0 kubenswrapper[31559]: I0216 02:41:04.135752 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 02:41:05.173333 master-0 kubenswrapper[31559]: I0216 02:41:05.173238 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"452f445f-0017-48f6-b304-12fbde1df087","Type":"ContainerStarted","Data":"dcea9c50478c23668902ee883c9e7d39d7c7d8828cfc653df462815101ea2ea3"} Feb 16 02:41:05.174006 master-0 kubenswrapper[31559]: I0216 02:41:05.173338 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"452f445f-0017-48f6-b304-12fbde1df087","Type":"ContainerStarted","Data":"462f0062de1ac628798b9b3bbcfe73820ce69339442e13930fd9df3925b801e4"} Feb 16 02:41:05.174006 master-0 kubenswrapper[31559]: I0216 02:41:05.173688 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:13.660396 master-0 kubenswrapper[31559]: I0216 02:41:13.660314 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 02:41:13.719049 master-0 kubenswrapper[31559]: I0216 02:41:13.718934 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=10.718906274 podStartE2EDuration="10.718906274s" podCreationTimestamp="2026-02-16 02:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:05.215582765 +0000 UTC m=+1117.560188810" watchObservedRunningTime="2026-02-16 
02:41:13.718906274 +0000 UTC m=+1126.063512319" Feb 16 02:41:14.334847 master-0 kubenswrapper[31559]: I0216 02:41:14.334684 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zcqp2"] Feb 16 02:41:14.337317 master-0 kubenswrapper[31559]: I0216 02:41:14.337262 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.341078 master-0 kubenswrapper[31559]: I0216 02:41:14.341037 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 02:41:14.341487 master-0 kubenswrapper[31559]: I0216 02:41:14.341409 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 02:41:14.366881 master-0 kubenswrapper[31559]: I0216 02:41:14.365733 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zcqp2"] Feb 16 02:41:14.433856 master-0 kubenswrapper[31559]: I0216 02:41:14.433790 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-scripts\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.434079 master-0 kubenswrapper[31559]: I0216 02:41:14.433882 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.434079 master-0 kubenswrapper[31559]: I0216 02:41:14.433976 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kq5x4\" (UniqueName: \"kubernetes.io/projected/12aa2463-4471-4146-9b3e-e5532987d769-kube-api-access-kq5x4\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.434156 master-0 kubenswrapper[31559]: I0216 02:41:14.434092 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-config-data\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.440819 master-0 kubenswrapper[31559]: I0216 02:41:14.440616 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 16 02:41:14.442785 master-0 kubenswrapper[31559]: I0216 02:41:14.442763 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.446889 master-0 kubenswrapper[31559]: I0216 02:41:14.446509 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Feb 16 02:41:14.467178 master-0 kubenswrapper[31559]: I0216 02:41:14.466813 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 16 02:41:14.541385 master-0 kubenswrapper[31559]: I0216 02:41:14.540077 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-config-data\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.541385 master-0 kubenswrapper[31559]: I0216 02:41:14.540182 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.541385 master-0 kubenswrapper[31559]: I0216 02:41:14.540294 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-scripts\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.541385 master-0 kubenswrapper[31559]: I0216 02:41:14.540344 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.541385 master-0 kubenswrapper[31559]: I0216 02:41:14.540393 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvvpz\" (UniqueName: \"kubernetes.io/projected/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-kube-api-access-mvvpz\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.541385 master-0 kubenswrapper[31559]: I0216 02:41:14.540422 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " 
pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.541385 master-0 kubenswrapper[31559]: I0216 02:41:14.540523 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq5x4\" (UniqueName: \"kubernetes.io/projected/12aa2463-4471-4146-9b3e-e5532987d769-kube-api-access-kq5x4\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.563962 master-0 kubenswrapper[31559]: I0216 02:41:14.559042 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-scripts\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.575192 master-0 kubenswrapper[31559]: I0216 02:41:14.574844 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-config-data\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.575606 master-0 kubenswrapper[31559]: I0216 02:41:14.575581 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.588662 master-0 kubenswrapper[31559]: I0216 02:41:14.588416 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq5x4\" (UniqueName: \"kubernetes.io/projected/12aa2463-4471-4146-9b3e-e5532987d769-kube-api-access-kq5x4\") pod \"nova-cell0-cell-mapping-zcqp2\" (UID: 
\"12aa2463-4471-4146-9b3e-e5532987d769\") " pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.653297 master-0 kubenswrapper[31559]: I0216 02:41:14.650803 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvvpz\" (UniqueName: \"kubernetes.io/projected/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-kube-api-access-mvvpz\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.653297 master-0 kubenswrapper[31559]: I0216 02:41:14.650898 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.653297 master-0 kubenswrapper[31559]: I0216 02:41:14.651345 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.661513 master-0 kubenswrapper[31559]: I0216 02:41:14.661360 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.662888 master-0 kubenswrapper[31559]: I0216 02:41:14.662850 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.677980 master-0 kubenswrapper[31559]: I0216 02:41:14.675670 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:14.695834 master-0 kubenswrapper[31559]: I0216 02:41:14.693887 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvvpz\" (UniqueName: \"kubernetes.io/projected/d42ca2ea-34cc-46f6-8551-928ddd4fecf5-kube-api-access-mvvpz\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"d42ca2ea-34cc-46f6-8551-928ddd4fecf5\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.718843 master-0 kubenswrapper[31559]: I0216 02:41:14.713513 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 02:41:14.724465 master-0 kubenswrapper[31559]: I0216 02:41:14.722994 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 02:41:14.739557 master-0 kubenswrapper[31559]: I0216 02:41:14.726884 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 02:41:14.739557 master-0 kubenswrapper[31559]: I0216 02:41:14.730791 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 02:41:14.776554 master-0 kubenswrapper[31559]: I0216 02:41:14.758318 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-config-data\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.776554 master-0 kubenswrapper[31559]: I0216 02:41:14.758387 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.776554 master-0 kubenswrapper[31559]: I0216 02:41:14.758490 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s2cn\" (UniqueName: \"kubernetes.io/projected/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-kube-api-access-6s2cn\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.776554 master-0 kubenswrapper[31559]: I0216 02:41:14.758543 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-logs\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.782466 master-0 kubenswrapper[31559]: I0216 
02:41:14.779976 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:14.783495 master-0 kubenswrapper[31559]: I0216 02:41:14.782778 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:14.785558 master-0 kubenswrapper[31559]: I0216 02:41:14.785493 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:41:14.791315 master-0 kubenswrapper[31559]: I0216 02:41:14.791264 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 02:41:14.794378 master-0 kubenswrapper[31559]: I0216 02:41:14.793861 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 02:41:14.802137 master-0 kubenswrapper[31559]: I0216 02:41:14.797096 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:14.819131 master-0 kubenswrapper[31559]: I0216 02:41:14.808252 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:14.819131 master-0 kubenswrapper[31559]: I0216 02:41:14.808905 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863413 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-config-data\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863525 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxjf\" (UniqueName: 
\"kubernetes.io/projected/53d330ba-814b-4e9b-960a-3a78377db2cf-kube-api-access-vdxjf\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863568 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863603 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t92ck\" (UniqueName: \"kubernetes.io/projected/f2a12c06-81c2-4c89-b64a-79132f5241ed-kube-api-access-t92ck\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863626 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863689 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863734 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/53d330ba-814b-4e9b-960a-3a78377db2cf-logs\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863811 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s2cn\" (UniqueName: \"kubernetes.io/projected/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-kube-api-access-6s2cn\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863912 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-logs\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.863961 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.864116 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-config-data\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:14.866574 master-0 kubenswrapper[31559]: I0216 02:41:14.866362 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-logs\") pod \"nova-api-0\" (UID: 
\"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.876837 master-0 kubenswrapper[31559]: I0216 02:41:14.874655 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.888591 master-0 kubenswrapper[31559]: I0216 02:41:14.886496 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-config-data\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.895466 master-0 kubenswrapper[31559]: I0216 02:41:14.895113 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s2cn\" (UniqueName: \"kubernetes.io/projected/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-kube-api-access-6s2cn\") pod \"nova-api-0\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") " pod="openstack/nova-api-0" Feb 16 02:41:14.899546 master-0 kubenswrapper[31559]: I0216 02:41:14.896930 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 02:41:14.918512 master-0 kubenswrapper[31559]: I0216 02:41:14.918185 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 02:41:14.920760 master-0 kubenswrapper[31559]: I0216 02:41:14.920466 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 02:41:14.925535 master-0 kubenswrapper[31559]: I0216 02:41:14.925179 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 02:41:14.955308 master-0 kubenswrapper[31559]: I0216 02:41:14.951365 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.961954 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-857cbc5f9f-lbsl2"] Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.964325 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.965340 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.965398 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-config-data\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0" Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.966021 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-config-data\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.966168 
31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxjf\" (UniqueName: \"kubernetes.io/projected/53d330ba-814b-4e9b-960a-3a78377db2cf-kube-api-access-vdxjf\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.966210 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92ck\" (UniqueName: \"kubernetes.io/projected/f2a12c06-81c2-4c89-b64a-79132f5241ed-kube-api-access-t92ck\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.966233 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.966269 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.966295 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53d330ba-814b-4e9b-960a-3a78377db2cf-logs\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.966317 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnrv\" (UniqueName: \"kubernetes.io/projected/e7be751e-f768-4ff3-aad5-985f26d3b399-kube-api-access-xgnrv\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:14.968322 master-0 kubenswrapper[31559]: I0216 02:41:14.966364 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:14.970759 master-0 kubenswrapper[31559]: I0216 02:41:14.970377 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:41:14.974249 master-0 kubenswrapper[31559]: I0216 02:41:14.974205 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-config-data\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:14.974249 master-0 kubenswrapper[31559]: I0216 02:41:14.974240 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:41:14.974787 master-0 kubenswrapper[31559]: I0216 02:41:14.974706 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:14.974994 master-0 kubenswrapper[31559]: I0216 02:41:14.974973 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53d330ba-814b-4e9b-960a-3a78377db2cf-logs\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:15.006589 master-0 kubenswrapper[31559]: I0216 02:41:15.006532 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-857cbc5f9f-lbsl2"]
Feb 16 02:41:15.069263 master-0 kubenswrapper[31559]: I0216 02:41:15.069131 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppnf\" (UniqueName: \"kubernetes.io/projected/7fa59dc4-e794-44ee-9b14-1899479e07c7-kube-api-access-gppnf\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.069881 master-0 kubenswrapper[31559]: I0216 02:41:15.069608 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-config-data\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:15.069881 master-0 kubenswrapper[31559]: I0216 02:41:15.069848 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-nb\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.070239 master-0 kubenswrapper[31559]: I0216 02:41:15.070203 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-sb\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.070345 master-0 kubenswrapper[31559]: I0216 02:41:15.070319 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-svc\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.070425 master-0 kubenswrapper[31559]: I0216 02:41:15.070402 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-swift-storage-0\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.070597 master-0 kubenswrapper[31559]: I0216 02:41:15.070573 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgnrv\" (UniqueName: \"kubernetes.io/projected/e7be751e-f768-4ff3-aad5-985f26d3b399-kube-api-access-xgnrv\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:15.070733 master-0 kubenswrapper[31559]: I0216 02:41:15.070709 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:15.070821 master-0 kubenswrapper[31559]: I0216 02:41:15.070796 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-config\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.073762 master-0 kubenswrapper[31559]: I0216 02:41:15.073718 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:15.075049 master-0 kubenswrapper[31559]: I0216 02:41:15.075009 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-config-data\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:15.138700 master-0 kubenswrapper[31559]: I0216 02:41:15.138590 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgnrv\" (UniqueName: \"kubernetes.io/projected/e7be751e-f768-4ff3-aad5-985f26d3b399-kube-api-access-xgnrv\") pod \"nova-scheduler-0\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:15.138895 master-0 kubenswrapper[31559]: I0216 02:41:15.138694 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxjf\" (UniqueName: \"kubernetes.io/projected/53d330ba-814b-4e9b-960a-3a78377db2cf-kube-api-access-vdxjf\") pod \"nova-metadata-0\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:15.140207 master-0 kubenswrapper[31559]: I0216 02:41:15.140172 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t92ck\" (UniqueName: \"kubernetes.io/projected/f2a12c06-81c2-4c89-b64a-79132f5241ed-kube-api-access-t92ck\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:41:15.173132 master-0 kubenswrapper[31559]: I0216 02:41:15.173077 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-config\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.173290 master-0 kubenswrapper[31559]: I0216 02:41:15.173175 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gppnf\" (UniqueName: \"kubernetes.io/projected/7fa59dc4-e794-44ee-9b14-1899479e07c7-kube-api-access-gppnf\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.173290 master-0 kubenswrapper[31559]: I0216 02:41:15.173225 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-nb\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.173426 master-0 kubenswrapper[31559]: I0216 02:41:15.173309 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-sb\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.173426 master-0 kubenswrapper[31559]: I0216 02:41:15.173347 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-svc\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.173426 master-0 kubenswrapper[31559]: I0216 02:41:15.173376 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-swift-storage-0\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.174190 master-0 kubenswrapper[31559]: I0216 02:41:15.174148 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-swift-storage-0\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.174727 master-0 kubenswrapper[31559]: I0216 02:41:15.174693 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-config\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.175605 master-0 kubenswrapper[31559]: I0216 02:41:15.175558 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-nb\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.176126 master-0 kubenswrapper[31559]: I0216 02:41:15.176086 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-sb\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.176704 master-0 kubenswrapper[31559]: I0216 02:41:15.176668 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-svc\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.186773 master-0 kubenswrapper[31559]: I0216 02:41:15.186708 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:41:15.262321 master-0 kubenswrapper[31559]: I0216 02:41:15.262275 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 02:41:15.299513 master-0 kubenswrapper[31559]: I0216 02:41:15.299468 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:41:15.317807 master-0 kubenswrapper[31559]: I0216 02:41:15.310427 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 02:41:15.449523 master-0 kubenswrapper[31559]: I0216 02:41:15.449466 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppnf\" (UniqueName: \"kubernetes.io/projected/7fa59dc4-e794-44ee-9b14-1899479e07c7-kube-api-access-gppnf\") pod \"dnsmasq-dns-857cbc5f9f-lbsl2\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.623130 master-0 kubenswrapper[31559]: I0216 02:41:15.621903 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:15.654448 master-0 kubenswrapper[31559]: I0216 02:41:15.654376 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zcqp2"]
Feb 16 02:41:15.668915 master-0 kubenswrapper[31559]: I0216 02:41:15.666614 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 16 02:41:15.707196 master-0 kubenswrapper[31559]: I0216 02:41:15.699585 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:41:15.916492 master-0 kubenswrapper[31559]: I0216 02:41:15.916417 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-86vgj"]
Feb 16 02:41:15.918477 master-0 kubenswrapper[31559]: I0216 02:41:15.918457 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:15.920824 master-0 kubenswrapper[31559]: I0216 02:41:15.920783 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 16 02:41:15.922248 master-0 kubenswrapper[31559]: I0216 02:41:15.922197 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 16 02:41:15.964491 master-0 kubenswrapper[31559]: I0216 02:41:15.961956 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-86vgj"]
Feb 16 02:41:16.034759 master-0 kubenswrapper[31559]: I0216 02:41:16.034693 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.034982 master-0 kubenswrapper[31559]: I0216 02:41:16.034878 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-config-data\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.035025 master-0 kubenswrapper[31559]: I0216 02:41:16.034992 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmcf\" (UniqueName: \"kubernetes.io/projected/d980fc2e-586c-4a8d-ad5f-d6385e22c959-kube-api-access-5jmcf\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.035173 master-0 kubenswrapper[31559]: I0216 02:41:16.035059 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-scripts\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.145664 master-0 kubenswrapper[31559]: I0216 02:41:16.138397 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jmcf\" (UniqueName: \"kubernetes.io/projected/d980fc2e-586c-4a8d-ad5f-d6385e22c959-kube-api-access-5jmcf\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.145664 master-0 kubenswrapper[31559]: I0216 02:41:16.144800 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-scripts\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.145664 master-0 kubenswrapper[31559]: I0216 02:41:16.144975 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.145664 master-0 kubenswrapper[31559]: I0216 02:41:16.145264 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-config-data\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.169776 master-0 kubenswrapper[31559]: I0216 02:41:16.164153 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.169776 master-0 kubenswrapper[31559]: I0216 02:41:16.169539 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-scripts\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.179641 master-0 kubenswrapper[31559]: I0216 02:41:16.176705 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jmcf\" (UniqueName: \"kubernetes.io/projected/d980fc2e-586c-4a8d-ad5f-d6385e22c959-kube-api-access-5jmcf\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.192476 master-0 kubenswrapper[31559]: I0216 02:41:16.188904 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-config-data\") pod \"nova-cell1-conductor-db-sync-86vgj\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") " pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.223528 master-0 kubenswrapper[31559]: I0216 02:41:16.221888 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 02:41:16.252580 master-0 kubenswrapper[31559]: I0216 02:41:16.252528 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 02:41:16.253049 master-0 kubenswrapper[31559]: I0216 02:41:16.253011 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:16.273521 master-0 kubenswrapper[31559]: I0216 02:41:16.273464 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 02:41:16.290190 master-0 kubenswrapper[31559]: I0216 02:41:16.289029 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-857cbc5f9f-lbsl2"]
Feb 16 02:41:16.371538 master-0 kubenswrapper[31559]: I0216 02:41:16.371480 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53d330ba-814b-4e9b-960a-3a78377db2cf","Type":"ContainerStarted","Data":"2a4086fb1ead359731b50deefb4b7638136d3a6ae6a84c6753e13af401030186"}
Feb 16 02:41:16.380141 master-0 kubenswrapper[31559]: I0216 02:41:16.380084 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f2a12c06-81c2-4c89-b64a-79132f5241ed","Type":"ContainerStarted","Data":"d6c9adb588a20c821d27ee98f8f11b26c2eee355e81a929c93ce39be3fad5738"}
Feb 16 02:41:16.383044 master-0 kubenswrapper[31559]: I0216 02:41:16.382927 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"d42ca2ea-34cc-46f6-8551-928ddd4fecf5","Type":"ContainerStarted","Data":"8b062e7644059e9ebe73fd5db7643c7849335fdcb7aa36a2339753fc131fdd9b"}
Feb 16 02:41:16.387942 master-0 kubenswrapper[31559]: I0216 02:41:16.387838 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7be751e-f768-4ff3-aad5-985f26d3b399","Type":"ContainerStarted","Data":"89f4185f583536053b801ddd8a6c3349142cb1a4620155e69522f3fa442947f4"}
Feb 16 02:41:16.392061 master-0 kubenswrapper[31559]: I0216 02:41:16.392006 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" event={"ID":"7fa59dc4-e794-44ee-9b14-1899479e07c7","Type":"ContainerStarted","Data":"c78578ac35144b12e0613a8df6fce528c660f83852aa6f0730704393e484815a"}
Feb 16 02:41:16.393614 master-0 kubenswrapper[31559]: I0216 02:41:16.393576 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a","Type":"ContainerStarted","Data":"be1944ff94cecc22ed765cff26152fb1077f343242524b38370b2f84d10f3e27"}
Feb 16 02:41:16.401857 master-0 kubenswrapper[31559]: I0216 02:41:16.401781 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zcqp2" event={"ID":"12aa2463-4471-4146-9b3e-e5532987d769","Type":"ContainerStarted","Data":"ed615757ce97a8f4e497be28fe5d75e698dc7e0b994b1152a3a1c0039d4f9e08"}
Feb 16 02:41:16.401857 master-0 kubenswrapper[31559]: I0216 02:41:16.401845 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zcqp2" event={"ID":"12aa2463-4471-4146-9b3e-e5532987d769","Type":"ContainerStarted","Data":"3f2b8bc5bd8a475a502b0a0ebf6b896f29ab551bc44c70d86e4feba1bcc43e78"}
Feb 16 02:41:16.433741 master-0 kubenswrapper[31559]: I0216 02:41:16.433560 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zcqp2" podStartSLOduration=2.433538637 podStartE2EDuration="2.433538637s" podCreationTimestamp="2026-02-16 02:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:16.425175384 +0000 UTC m=+1128.769781399" watchObservedRunningTime="2026-02-16 02:41:16.433538637 +0000 UTC m=+1128.778144652"
Feb 16 02:41:16.782891 master-0 kubenswrapper[31559]: I0216 02:41:16.782831 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-86vgj"]
Feb 16 02:41:17.419214 master-0 kubenswrapper[31559]: I0216 02:41:17.419102 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-86vgj" event={"ID":"d980fc2e-586c-4a8d-ad5f-d6385e22c959","Type":"ContainerStarted","Data":"5cd96e1fed1955707437875738994ccd1c511fcc24215e5f84c0f8efbc32defc"}
Feb 16 02:41:17.422489 master-0 kubenswrapper[31559]: I0216 02:41:17.422414 31559 generic.go:334] "Generic (PLEG): container finished" podID="7fa59dc4-e794-44ee-9b14-1899479e07c7" containerID="39db39b107871c2872de2f8e91c34b2b4e4a9f031f156bc09866241e90b5c086" exitCode=0
Feb 16 02:41:17.422731 master-0 kubenswrapper[31559]: I0216 02:41:17.422529 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" event={"ID":"7fa59dc4-e794-44ee-9b14-1899479e07c7","Type":"ContainerDied","Data":"39db39b107871c2872de2f8e91c34b2b4e4a9f031f156bc09866241e90b5c086"}
Feb 16 02:41:18.441136 master-0 kubenswrapper[31559]: I0216 02:41:18.441059 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-86vgj" event={"ID":"d980fc2e-586c-4a8d-ad5f-d6385e22c959","Type":"ContainerStarted","Data":"7f676bcfacc06c6183bcab91167d1bac7b31aad9f2d96d4cf19dc517f26bfb3a"}
Feb 16 02:41:18.463996 master-0 kubenswrapper[31559]: I0216 02:41:18.463930 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-86vgj" podStartSLOduration=3.463914108 podStartE2EDuration="3.463914108s" podCreationTimestamp="2026-02-16 02:41:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:18.462751618 +0000 UTC m=+1130.807357633" watchObservedRunningTime="2026-02-16 02:41:18.463914108 +0000 UTC m=+1130.808520113"
Feb 16 02:41:18.922117 master-0 kubenswrapper[31559]: I0216 02:41:18.920248 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 02:41:18.935218 master-0 kubenswrapper[31559]: I0216 02:41:18.933956 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 02:41:19.454758 master-0 kubenswrapper[31559]: I0216 02:41:19.454676 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" event={"ID":"7fa59dc4-e794-44ee-9b14-1899479e07c7","Type":"ContainerStarted","Data":"ace857275fb6ef2c729b6cd725d28524d6f35dee0e791135b0ca446b425284fa"}
Feb 16 02:41:19.454758 master-0 kubenswrapper[31559]: I0216 02:41:19.454765 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2"
Feb 16 02:41:19.459274 master-0 kubenswrapper[31559]: I0216 02:41:19.459227 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a","Type":"ContainerStarted","Data":"4f4a9838b3ffe7d069796932cbf83cd3b8c64a9cdc99ccff54bd09bb2a17adfc"}
Feb 16 02:41:19.459352 master-0 kubenswrapper[31559]: I0216 02:41:19.459277 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a","Type":"ContainerStarted","Data":"c38e90199e886812cceda806368a1aabe6aca65ca4119f7342e3db1a895467a7"}
Feb 16 02:41:19.462186 master-0 kubenswrapper[31559]: I0216 02:41:19.461325 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53d330ba-814b-4e9b-960a-3a78377db2cf","Type":"ContainerStarted","Data":"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d"}
Feb 16 02:41:19.462186 master-0 kubenswrapper[31559]: I0216 02:41:19.461376 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53d330ba-814b-4e9b-960a-3a78377db2cf","Type":"ContainerStarted","Data":"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29"}
Feb 16 02:41:19.462186 master-0 kubenswrapper[31559]: I0216 02:41:19.461407 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerName="nova-metadata-log" containerID="cri-o://c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29" gracePeriod=30
Feb 16 02:41:19.462186 master-0 kubenswrapper[31559]: I0216 02:41:19.461425 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerName="nova-metadata-metadata" containerID="cri-o://a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d" gracePeriod=30
Feb 16 02:41:19.466463 master-0 kubenswrapper[31559]: I0216 02:41:19.464539 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7be751e-f768-4ff3-aad5-985f26d3b399","Type":"ContainerStarted","Data":"8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637"}
Feb 16 02:41:19.495671 master-0 kubenswrapper[31559]: I0216 02:41:19.495580 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" podStartSLOduration=5.495562992 podStartE2EDuration="5.495562992s" podCreationTimestamp="2026-02-16 02:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:19.481513113 +0000 UTC m=+1131.826119128" watchObservedRunningTime="2026-02-16 02:41:19.495562992 +0000 UTC m=+1131.840169007"
Feb 16 02:41:19.599684 master-0 kubenswrapper[31559]: I0216 02:41:19.599592 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.371011927 podStartE2EDuration="5.599573608s" podCreationTimestamp="2026-02-16 02:41:14 +0000 UTC" firstStartedPulling="2026-02-16 02:41:16.154349658 +0000 UTC m=+1128.498955663" lastFinishedPulling="2026-02-16 02:41:18.382911329 +0000 UTC m=+1130.727517344" observedRunningTime="2026-02-16 02:41:19.59377574 +0000 UTC m=+1131.938381755" watchObservedRunningTime="2026-02-16 02:41:19.599573608 +0000 UTC m=+1131.944179613"
Feb 16 02:41:19.630315 master-0 kubenswrapper[31559]: I0216 02:41:19.630176 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.991765473 podStartE2EDuration="5.63016085s" podCreationTimestamp="2026-02-16 02:41:14 +0000 UTC" firstStartedPulling="2026-02-16 02:41:15.748891284 +0000 UTC m=+1128.093497299" lastFinishedPulling="2026-02-16 02:41:18.387286671 +0000 UTC m=+1130.731892676" observedRunningTime="2026-02-16 02:41:19.627642425 +0000 UTC m=+1131.972248440" watchObservedRunningTime="2026-02-16 02:41:19.63016085 +0000 UTC m=+1131.974766865"
Feb 16 02:41:19.655449 master-0 kubenswrapper[31559]: I0216 02:41:19.654474 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.464529015 podStartE2EDuration="5.65445535s" podCreationTimestamp="2026-02-16 02:41:14 +0000 UTC" firstStartedPulling="2026-02-16 02:41:16.193296092 +0000 UTC m=+1128.537902107" lastFinishedPulling="2026-02-16 02:41:18.383222427 +0000 UTC m=+1130.727828442" observedRunningTime="2026-02-16 02:41:19.643019408 +0000 UTC m=+1131.987625423" watchObservedRunningTime="2026-02-16 02:41:19.65445535 +0000 UTC m=+1131.999061365"
Feb 16 02:41:20.143119 master-0 kubenswrapper[31559]: I0216 02:41:20.143055 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 02:41:20.178625 master-0 kubenswrapper[31559]: I0216 02:41:20.178555 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-combined-ca-bundle\") pod \"53d330ba-814b-4e9b-960a-3a78377db2cf\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") "
Feb 16 02:41:20.182045 master-0 kubenswrapper[31559]: I0216 02:41:20.179741 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdxjf\" (UniqueName: \"kubernetes.io/projected/53d330ba-814b-4e9b-960a-3a78377db2cf-kube-api-access-vdxjf\") pod \"53d330ba-814b-4e9b-960a-3a78377db2cf\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") "
Feb 16 02:41:20.182045 master-0 kubenswrapper[31559]: I0216 02:41:20.179896 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-config-data\") pod \"53d330ba-814b-4e9b-960a-3a78377db2cf\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") "
Feb 16 02:41:20.182045 master-0 kubenswrapper[31559]: I0216 02:41:20.180022 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53d330ba-814b-4e9b-960a-3a78377db2cf-logs\") pod \"53d330ba-814b-4e9b-960a-3a78377db2cf\" (UID: \"53d330ba-814b-4e9b-960a-3a78377db2cf\") "
Feb 16 02:41:20.182045 master-0 kubenswrapper[31559]: I0216 02:41:20.181339 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d330ba-814b-4e9b-960a-3a78377db2cf-logs" (OuterVolumeSpecName: "logs") pod "53d330ba-814b-4e9b-960a-3a78377db2cf" (UID: "53d330ba-814b-4e9b-960a-3a78377db2cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:41:20.184534 master-0 kubenswrapper[31559]: I0216 02:41:20.182386 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d330ba-814b-4e9b-960a-3a78377db2cf-kube-api-access-vdxjf" (OuterVolumeSpecName: "kube-api-access-vdxjf") pod "53d330ba-814b-4e9b-960a-3a78377db2cf" (UID: "53d330ba-814b-4e9b-960a-3a78377db2cf"). InnerVolumeSpecName "kube-api-access-vdxjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:41:20.216864 master-0 kubenswrapper[31559]: I0216 02:41:20.216554 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53d330ba-814b-4e9b-960a-3a78377db2cf" (UID: "53d330ba-814b-4e9b-960a-3a78377db2cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:20.241757 master-0 kubenswrapper[31559]: I0216 02:41:20.241612 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-config-data" (OuterVolumeSpecName: "config-data") pod "53d330ba-814b-4e9b-960a-3a78377db2cf" (UID: "53d330ba-814b-4e9b-960a-3a78377db2cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:20.285347 master-0 kubenswrapper[31559]: I0216 02:41:20.285257 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53d330ba-814b-4e9b-960a-3a78377db2cf-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:20.285580 master-0 kubenswrapper[31559]: I0216 02:41:20.285322 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:20.285580 master-0 kubenswrapper[31559]: I0216 02:41:20.285388 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdxjf\" (UniqueName: \"kubernetes.io/projected/53d330ba-814b-4e9b-960a-3a78377db2cf-kube-api-access-vdxjf\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:20.285580 master-0 kubenswrapper[31559]: I0216 02:41:20.285398 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d330ba-814b-4e9b-960a-3a78377db2cf-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:20.311389 master-0 kubenswrapper[31559]: I0216 02:41:20.311238 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 02:41:20.490184 master-0 kubenswrapper[31559]: I0216 02:41:20.490114 31559 generic.go:334] "Generic (PLEG): container finished" podID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerID="a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d" exitCode=0
Feb 16 02:41:20.490184 master-0 kubenswrapper[31559]: I0216 02:41:20.490154 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53d330ba-814b-4e9b-960a-3a78377db2cf","Type":"ContainerDied","Data":"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d"}
Feb 16 02:41:20.490184 master-0
kubenswrapper[31559]: I0216 02:41:20.490165 31559 generic.go:334] "Generic (PLEG): container finished" podID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerID="c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29" exitCode=143 Feb 16 02:41:20.490808 master-0 kubenswrapper[31559]: I0216 02:41:20.490197 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53d330ba-814b-4e9b-960a-3a78377db2cf","Type":"ContainerDied","Data":"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29"} Feb 16 02:41:20.490808 master-0 kubenswrapper[31559]: I0216 02:41:20.490216 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53d330ba-814b-4e9b-960a-3a78377db2cf","Type":"ContainerDied","Data":"2a4086fb1ead359731b50deefb4b7638136d3a6ae6a84c6753e13af401030186"} Feb 16 02:41:20.490808 master-0 kubenswrapper[31559]: I0216 02:41:20.490235 31559 scope.go:117] "RemoveContainer" containerID="a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d" Feb 16 02:41:20.490808 master-0 kubenswrapper[31559]: I0216 02:41:20.490239 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:41:20.515913 master-0 kubenswrapper[31559]: I0216 02:41:20.515863 31559 scope.go:117] "RemoveContainer" containerID="c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29" Feb 16 02:41:20.548191 master-0 kubenswrapper[31559]: I0216 02:41:20.547927 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:20.566426 master-0 kubenswrapper[31559]: I0216 02:41:20.566253 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:20.566856 master-0 kubenswrapper[31559]: I0216 02:41:20.566541 31559 scope.go:117] "RemoveContainer" containerID="a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d" Feb 16 02:41:20.567117 master-0 kubenswrapper[31559]: E0216 02:41:20.567082 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d\": container with ID starting with a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d not found: ID does not exist" containerID="a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d" Feb 16 02:41:20.567189 master-0 kubenswrapper[31559]: I0216 02:41:20.567111 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d"} err="failed to get container status \"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d\": rpc error: code = NotFound desc = could not find container \"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d\": container with ID starting with a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d not found: ID does not exist" Feb 16 02:41:20.567189 master-0 kubenswrapper[31559]: I0216 02:41:20.567132 31559 scope.go:117] "RemoveContainer" 
containerID="c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29" Feb 16 02:41:20.567348 master-0 kubenswrapper[31559]: E0216 02:41:20.567322 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29\": container with ID starting with c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29 not found: ID does not exist" containerID="c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29" Feb 16 02:41:20.567405 master-0 kubenswrapper[31559]: I0216 02:41:20.567342 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29"} err="failed to get container status \"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29\": rpc error: code = NotFound desc = could not find container \"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29\": container with ID starting with c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29 not found: ID does not exist" Feb 16 02:41:20.567405 master-0 kubenswrapper[31559]: I0216 02:41:20.567355 31559 scope.go:117] "RemoveContainer" containerID="a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d" Feb 16 02:41:20.567756 master-0 kubenswrapper[31559]: I0216 02:41:20.567730 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d"} err="failed to get container status \"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d\": rpc error: code = NotFound desc = could not find container \"a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d\": container with ID starting with a38c13133a2c666e6ee11c1a92c0404b199e820162b1c514e128a5b21e34428d not found: ID does not exist" Feb 16 02:41:20.567756 master-0 
kubenswrapper[31559]: I0216 02:41:20.567744 31559 scope.go:117] "RemoveContainer" containerID="c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29" Feb 16 02:41:20.567986 master-0 kubenswrapper[31559]: I0216 02:41:20.567952 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29"} err="failed to get container status \"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29\": rpc error: code = NotFound desc = could not find container \"c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29\": container with ID starting with c4f9975f68defade099794a20bb6f4d0ae09d6235bc7b6f1638594ff92d88e29 not found: ID does not exist" Feb 16 02:41:20.611208 master-0 kubenswrapper[31559]: I0216 02:41:20.611119 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:20.612426 master-0 kubenswrapper[31559]: E0216 02:41:20.612391 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerName="nova-metadata-log" Feb 16 02:41:20.612426 master-0 kubenswrapper[31559]: I0216 02:41:20.612422 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerName="nova-metadata-log" Feb 16 02:41:20.612579 master-0 kubenswrapper[31559]: E0216 02:41:20.612551 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerName="nova-metadata-metadata" Feb 16 02:41:20.612579 master-0 kubenswrapper[31559]: I0216 02:41:20.612573 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerName="nova-metadata-metadata" Feb 16 02:41:20.612895 master-0 kubenswrapper[31559]: I0216 02:41:20.612864 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" 
containerName="nova-metadata-log" Feb 16 02:41:20.612939 master-0 kubenswrapper[31559]: I0216 02:41:20.612899 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" containerName="nova-metadata-metadata" Feb 16 02:41:20.614386 master-0 kubenswrapper[31559]: I0216 02:41:20.614357 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:41:20.617088 master-0 kubenswrapper[31559]: I0216 02:41:20.616038 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 02:41:20.622263 master-0 kubenswrapper[31559]: I0216 02:41:20.622201 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 02:41:20.653882 master-0 kubenswrapper[31559]: I0216 02:41:20.653815 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:20.800219 master-0 kubenswrapper[31559]: I0216 02:41:20.800148 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-config-data\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.800416 master-0 kubenswrapper[31559]: I0216 02:41:20.800355 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpcz\" (UniqueName: \"kubernetes.io/projected/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-kube-api-access-vvpcz\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.800643 master-0 kubenswrapper[31559]: I0216 02:41:20.800494 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-logs\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.800643 master-0 kubenswrapper[31559]: I0216 02:41:20.800525 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.800643 master-0 kubenswrapper[31559]: I0216 02:41:20.800577 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.909382 master-0 kubenswrapper[31559]: I0216 02:41:20.909258 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-logs\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.909382 master-0 kubenswrapper[31559]: I0216 02:41:20.909330 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.909627 master-0 kubenswrapper[31559]: I0216 02:41:20.909477 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.909663 master-0 kubenswrapper[31559]: I0216 02:41:20.909627 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-config-data\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.909843 master-0 kubenswrapper[31559]: I0216 02:41:20.909764 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvpcz\" (UniqueName: \"kubernetes.io/projected/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-kube-api-access-vvpcz\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.912702 master-0 kubenswrapper[31559]: I0216 02:41:20.912669 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-logs\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.917892 master-0 kubenswrapper[31559]: I0216 02:41:20.917868 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.918542 master-0 kubenswrapper[31559]: I0216 02:41:20.918525 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-config-data\") pod \"nova-metadata-0\" (UID: 
\"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.926179 master-0 kubenswrapper[31559]: I0216 02:41:20.926129 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.931621 master-0 kubenswrapper[31559]: I0216 02:41:20.931556 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvpcz\" (UniqueName: \"kubernetes.io/projected/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-kube-api-access-vvpcz\") pod \"nova-metadata-0\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " pod="openstack/nova-metadata-0" Feb 16 02:41:20.935766 master-0 kubenswrapper[31559]: I0216 02:41:20.934691 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:41:21.518400 master-0 kubenswrapper[31559]: I0216 02:41:21.518328 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:21.816924 master-0 kubenswrapper[31559]: W0216 02:41:21.816762 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb5e5602_f025_4f1e_9c92_e90de0c8cddf.slice/crio-1551fd684dd950540dbbd1e2fe72d8a5c1b5c57c42ed0117dd226c6f8d53b0a5 WatchSource:0}: Error finding container 1551fd684dd950540dbbd1e2fe72d8a5c1b5c57c42ed0117dd226c6f8d53b0a5: Status 404 returned error can't find the container with id 1551fd684dd950540dbbd1e2fe72d8a5c1b5c57c42ed0117dd226c6f8d53b0a5 Feb 16 02:41:21.942085 master-0 kubenswrapper[31559]: I0216 02:41:21.942004 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d330ba-814b-4e9b-960a-3a78377db2cf" 
path="/var/lib/kubelet/pods/53d330ba-814b-4e9b-960a-3a78377db2cf/volumes" Feb 16 02:41:22.522189 master-0 kubenswrapper[31559]: I0216 02:41:22.522124 31559 generic.go:334] "Generic (PLEG): container finished" podID="12aa2463-4471-4146-9b3e-e5532987d769" containerID="ed615757ce97a8f4e497be28fe5d75e698dc7e0b994b1152a3a1c0039d4f9e08" exitCode=0 Feb 16 02:41:22.522825 master-0 kubenswrapper[31559]: I0216 02:41:22.522213 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zcqp2" event={"ID":"12aa2463-4471-4146-9b3e-e5532987d769","Type":"ContainerDied","Data":"ed615757ce97a8f4e497be28fe5d75e698dc7e0b994b1152a3a1c0039d4f9e08"} Feb 16 02:41:22.525347 master-0 kubenswrapper[31559]: I0216 02:41:22.525301 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f2a12c06-81c2-4c89-b64a-79132f5241ed","Type":"ContainerStarted","Data":"30dd8519f624f370d98b8746a04a06d4085df0a4909d080ba9beb06725c29cac"} Feb 16 02:41:22.525568 master-0 kubenswrapper[31559]: I0216 02:41:22.525482 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f2a12c06-81c2-4c89-b64a-79132f5241ed" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://30dd8519f624f370d98b8746a04a06d4085df0a4909d080ba9beb06725c29cac" gracePeriod=30 Feb 16 02:41:22.528171 master-0 kubenswrapper[31559]: I0216 02:41:22.528139 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb5e5602-f025-4f1e-9c92-e90de0c8cddf","Type":"ContainerStarted","Data":"263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128"} Feb 16 02:41:22.528260 master-0 kubenswrapper[31559]: I0216 02:41:22.528173 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"bb5e5602-f025-4f1e-9c92-e90de0c8cddf","Type":"ContainerStarted","Data":"d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2"} Feb 16 02:41:22.528260 master-0 kubenswrapper[31559]: I0216 02:41:22.528183 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb5e5602-f025-4f1e-9c92-e90de0c8cddf","Type":"ContainerStarted","Data":"1551fd684dd950540dbbd1e2fe72d8a5c1b5c57c42ed0117dd226c6f8d53b0a5"} Feb 16 02:41:22.603899 master-0 kubenswrapper[31559]: I0216 02:41:22.596480 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.59645416 podStartE2EDuration="2.59645416s" podCreationTimestamp="2026-02-16 02:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:22.580621715 +0000 UTC m=+1134.925227730" watchObservedRunningTime="2026-02-16 02:41:22.59645416 +0000 UTC m=+1134.941060185" Feb 16 02:41:22.610037 master-0 kubenswrapper[31559]: I0216 02:41:22.609959 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.950406777 podStartE2EDuration="8.609938534s" podCreationTimestamp="2026-02-16 02:41:14 +0000 UTC" firstStartedPulling="2026-02-16 02:41:16.22920826 +0000 UTC m=+1128.573814275" lastFinishedPulling="2026-02-16 02:41:21.888740017 +0000 UTC m=+1134.233346032" observedRunningTime="2026-02-16 02:41:22.605295895 +0000 UTC m=+1134.949901910" watchObservedRunningTime="2026-02-16 02:41:22.609938534 +0000 UTC m=+1134.954544549" Feb 16 02:41:25.187797 master-0 kubenswrapper[31559]: I0216 02:41:25.187702 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 02:41:25.187797 master-0 kubenswrapper[31559]: I0216 02:41:25.187759 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Feb 16 02:41:25.302526 master-0 kubenswrapper[31559]: I0216 02:41:25.300318 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:25.313483 master-0 kubenswrapper[31559]: I0216 02:41:25.312051 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 02:41:25.353227 master-0 kubenswrapper[31559]: I0216 02:41:25.353153 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 02:41:25.615808 master-0 kubenswrapper[31559]: I0216 02:41:25.615628 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 02:41:25.623544 master-0 kubenswrapper[31559]: I0216 02:41:25.623470 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" Feb 16 02:41:25.954210 master-0 kubenswrapper[31559]: I0216 02:41:25.954120 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 02:41:25.955826 master-0 kubenswrapper[31559]: I0216 02:41:25.955805 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 02:41:26.236250 master-0 kubenswrapper[31559]: I0216 02:41:26.230953 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dcfcd5c95-986r2"] Feb 16 02:41:26.236250 master-0 kubenswrapper[31559]: I0216 02:41:26.231220 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" podUID="416c4107-c9b1-4c74-8084-5e5b3ea18696" containerName="dnsmasq-dns" containerID="cri-o://1bfa7679a8bea2070f612a1384832a6f1cfa76ca29e92456471e9c440abc1790" gracePeriod=10 Feb 16 02:41:26.276462 master-0 kubenswrapper[31559]: I0216 02:41:26.274087 31559 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-api-0" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.5:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 02:41:26.276462 master-0 kubenswrapper[31559]: I0216 02:41:26.274353 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.5:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 02:41:26.583591 master-0 kubenswrapper[31559]: I0216 02:41:26.583402 31559 generic.go:334] "Generic (PLEG): container finished" podID="416c4107-c9b1-4c74-8084-5e5b3ea18696" containerID="1bfa7679a8bea2070f612a1384832a6f1cfa76ca29e92456471e9c440abc1790" exitCode=0 Feb 16 02:41:26.584387 master-0 kubenswrapper[31559]: I0216 02:41:26.584325 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" event={"ID":"416c4107-c9b1-4c74-8084-5e5b3ea18696","Type":"ContainerDied","Data":"1bfa7679a8bea2070f612a1384832a6f1cfa76ca29e92456471e9c440abc1790"} Feb 16 02:41:28.153245 master-0 kubenswrapper[31559]: I0216 02:41:28.152793 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:28.168902 master-0 kubenswrapper[31559]: I0216 02:41:28.168846 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-combined-ca-bundle\") pod \"12aa2463-4471-4146-9b3e-e5532987d769\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " Feb 16 02:41:28.169215 master-0 kubenswrapper[31559]: I0216 02:41:28.169178 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq5x4\" (UniqueName: \"kubernetes.io/projected/12aa2463-4471-4146-9b3e-e5532987d769-kube-api-access-kq5x4\") pod \"12aa2463-4471-4146-9b3e-e5532987d769\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " Feb 16 02:41:28.169215 master-0 kubenswrapper[31559]: I0216 02:41:28.169213 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-scripts\") pod \"12aa2463-4471-4146-9b3e-e5532987d769\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " Feb 16 02:41:28.169332 master-0 kubenswrapper[31559]: I0216 02:41:28.169244 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-config-data\") pod \"12aa2463-4471-4146-9b3e-e5532987d769\" (UID: \"12aa2463-4471-4146-9b3e-e5532987d769\") " Feb 16 02:41:28.196957 master-0 kubenswrapper[31559]: I0216 02:41:28.191453 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12aa2463-4471-4146-9b3e-e5532987d769-kube-api-access-kq5x4" (OuterVolumeSpecName: "kube-api-access-kq5x4") pod "12aa2463-4471-4146-9b3e-e5532987d769" (UID: "12aa2463-4471-4146-9b3e-e5532987d769"). InnerVolumeSpecName "kube-api-access-kq5x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:41:28.196957 master-0 kubenswrapper[31559]: I0216 02:41:28.192584 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-scripts" (OuterVolumeSpecName: "scripts") pod "12aa2463-4471-4146-9b3e-e5532987d769" (UID: "12aa2463-4471-4146-9b3e-e5532987d769"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:28.245005 master-0 kubenswrapper[31559]: I0216 02:41:28.244864 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-config-data" (OuterVolumeSpecName: "config-data") pod "12aa2463-4471-4146-9b3e-e5532987d769" (UID: "12aa2463-4471-4146-9b3e-e5532987d769"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:28.273164 master-0 kubenswrapper[31559]: I0216 02:41:28.273100 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq5x4\" (UniqueName: \"kubernetes.io/projected/12aa2463-4471-4146-9b3e-e5532987d769-kube-api-access-kq5x4\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.273164 master-0 kubenswrapper[31559]: I0216 02:41:28.273142 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.273164 master-0 kubenswrapper[31559]: I0216 02:41:28.273154 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.273868 master-0 kubenswrapper[31559]: I0216 02:41:28.273818 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12aa2463-4471-4146-9b3e-e5532987d769" (UID: "12aa2463-4471-4146-9b3e-e5532987d769"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:28.391599 master-0 kubenswrapper[31559]: I0216 02:41:28.388328 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12aa2463-4471-4146-9b3e-e5532987d769-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.477162 master-0 kubenswrapper[31559]: I0216 02:41:28.477097 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:41:28.592591 master-0 kubenswrapper[31559]: I0216 02:41:28.592514 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-config\") pod \"416c4107-c9b1-4c74-8084-5e5b3ea18696\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " Feb 16 02:41:28.592825 master-0 kubenswrapper[31559]: I0216 02:41:28.592735 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-sb\") pod \"416c4107-c9b1-4c74-8084-5e5b3ea18696\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " Feb 16 02:41:28.592825 master-0 kubenswrapper[31559]: I0216 02:41:28.592795 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-swift-storage-0\") pod \"416c4107-c9b1-4c74-8084-5e5b3ea18696\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " Feb 16 02:41:28.592932 master-0 kubenswrapper[31559]: I0216 02:41:28.592907 31559 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-nb\") pod \"416c4107-c9b1-4c74-8084-5e5b3ea18696\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " Feb 16 02:41:28.592976 master-0 kubenswrapper[31559]: I0216 02:41:28.592962 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2qz8\" (UniqueName: \"kubernetes.io/projected/416c4107-c9b1-4c74-8084-5e5b3ea18696-kube-api-access-n2qz8\") pod \"416c4107-c9b1-4c74-8084-5e5b3ea18696\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " Feb 16 02:41:28.593242 master-0 kubenswrapper[31559]: I0216 02:41:28.593213 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-svc\") pod \"416c4107-c9b1-4c74-8084-5e5b3ea18696\" (UID: \"416c4107-c9b1-4c74-8084-5e5b3ea18696\") " Feb 16 02:41:28.597365 master-0 kubenswrapper[31559]: I0216 02:41:28.597273 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416c4107-c9b1-4c74-8084-5e5b3ea18696-kube-api-access-n2qz8" (OuterVolumeSpecName: "kube-api-access-n2qz8") pod "416c4107-c9b1-4c74-8084-5e5b3ea18696" (UID: "416c4107-c9b1-4c74-8084-5e5b3ea18696"). InnerVolumeSpecName "kube-api-access-n2qz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:41:28.633602 master-0 kubenswrapper[31559]: I0216 02:41:28.631915 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" event={"ID":"416c4107-c9b1-4c74-8084-5e5b3ea18696","Type":"ContainerDied","Data":"bc35d2496277f24b3feda3f5faca5048481d176724a381fbe9d9fcd0b0ad9940"} Feb 16 02:41:28.633602 master-0 kubenswrapper[31559]: I0216 02:41:28.632095 31559 scope.go:117] "RemoveContainer" containerID="1bfa7679a8bea2070f612a1384832a6f1cfa76ca29e92456471e9c440abc1790" Feb 16 02:41:28.633602 master-0 kubenswrapper[31559]: I0216 02:41:28.632319 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dcfcd5c95-986r2" Feb 16 02:41:28.634126 master-0 kubenswrapper[31559]: I0216 02:41:28.634088 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"d42ca2ea-34cc-46f6-8551-928ddd4fecf5","Type":"ContainerStarted","Data":"6eb32595306227eb6cf9457a1a92eeb0afaa4df06123e8e10676520d1700f6b4"} Feb 16 02:41:28.634725 master-0 kubenswrapper[31559]: I0216 02:41:28.634685 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:28.637560 master-0 kubenswrapper[31559]: I0216 02:41:28.637452 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zcqp2" Feb 16 02:41:28.637560 master-0 kubenswrapper[31559]: I0216 02:41:28.637457 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zcqp2" event={"ID":"12aa2463-4471-4146-9b3e-e5532987d769","Type":"ContainerDied","Data":"3f2b8bc5bd8a475a502b0a0ebf6b896f29ab551bc44c70d86e4feba1bcc43e78"} Feb 16 02:41:28.638337 master-0 kubenswrapper[31559]: I0216 02:41:28.637565 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f2b8bc5bd8a475a502b0a0ebf6b896f29ab551bc44c70d86e4feba1bcc43e78" Feb 16 02:41:28.658618 master-0 kubenswrapper[31559]: I0216 02:41:28.658577 31559 scope.go:117] "RemoveContainer" containerID="cbca2071a74a5f5b48c10860a2ca4ffc816cfc0405215bee1a7960caed36c5ed" Feb 16 02:41:28.663160 master-0 kubenswrapper[31559]: I0216 02:41:28.662905 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.241281588 podStartE2EDuration="14.662892289s" podCreationTimestamp="2026-02-16 02:41:14 +0000 UTC" firstStartedPulling="2026-02-16 02:41:15.742356427 +0000 UTC m=+1128.086962442" lastFinishedPulling="2026-02-16 02:41:28.163967128 +0000 UTC m=+1140.508573143" observedRunningTime="2026-02-16 02:41:28.658722122 +0000 UTC m=+1141.003328137" watchObservedRunningTime="2026-02-16 02:41:28.662892289 +0000 UTC m=+1141.007498294" Feb 16 02:41:28.674369 master-0 kubenswrapper[31559]: I0216 02:41:28.674306 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "416c4107-c9b1-4c74-8084-5e5b3ea18696" (UID: "416c4107-c9b1-4c74-8084-5e5b3ea18696"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:41:28.688821 master-0 kubenswrapper[31559]: I0216 02:41:28.688754 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "416c4107-c9b1-4c74-8084-5e5b3ea18696" (UID: "416c4107-c9b1-4c74-8084-5e5b3ea18696"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:41:28.690226 master-0 kubenswrapper[31559]: I0216 02:41:28.690190 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "416c4107-c9b1-4c74-8084-5e5b3ea18696" (UID: "416c4107-c9b1-4c74-8084-5e5b3ea18696"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:41:28.696668 master-0 kubenswrapper[31559]: I0216 02:41:28.696625 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.696668 master-0 kubenswrapper[31559]: I0216 02:41:28.696653 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.696668 master-0 kubenswrapper[31559]: I0216 02:41:28.696666 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.696907 master-0 kubenswrapper[31559]: I0216 02:41:28.696679 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2qz8\" (UniqueName: 
\"kubernetes.io/projected/416c4107-c9b1-4c74-8084-5e5b3ea18696-kube-api-access-n2qz8\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.697278 master-0 kubenswrapper[31559]: I0216 02:41:28.697246 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 02:41:28.698246 master-0 kubenswrapper[31559]: I0216 02:41:28.698212 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-config" (OuterVolumeSpecName: "config") pod "416c4107-c9b1-4c74-8084-5e5b3ea18696" (UID: "416c4107-c9b1-4c74-8084-5e5b3ea18696"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:41:28.729144 master-0 kubenswrapper[31559]: I0216 02:41:28.729064 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "416c4107-c9b1-4c74-8084-5e5b3ea18696" (UID: "416c4107-c9b1-4c74-8084-5e5b3ea18696"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:41:28.798980 master-0 kubenswrapper[31559]: I0216 02:41:28.798914 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:28.799305 master-0 kubenswrapper[31559]: I0216 02:41:28.799284 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/416c4107-c9b1-4c74-8084-5e5b3ea18696-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:29.000767 master-0 kubenswrapper[31559]: I0216 02:41:29.000664 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dcfcd5c95-986r2"] Feb 16 02:41:29.019982 master-0 kubenswrapper[31559]: I0216 02:41:29.019894 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dcfcd5c95-986r2"] Feb 16 02:41:29.397047 master-0 kubenswrapper[31559]: I0216 02:41:29.396897 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 02:41:29.397896 master-0 kubenswrapper[31559]: I0216 02:41:29.397149 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-log" containerID="cri-o://c38e90199e886812cceda806368a1aabe6aca65ca4119f7342e3db1a895467a7" gracePeriod=30 Feb 16 02:41:29.397896 master-0 kubenswrapper[31559]: I0216 02:41:29.397717 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-api" containerID="cri-o://4f4a9838b3ffe7d069796932cbf83cd3b8c64a9cdc99ccff54bd09bb2a17adfc" gracePeriod=30 Feb 16 02:41:29.413472 master-0 kubenswrapper[31559]: I0216 02:41:29.413392 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 16 02:41:29.413722 master-0 kubenswrapper[31559]: I0216 02:41:29.413674 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e7be751e-f768-4ff3-aad5-985f26d3b399" containerName="nova-scheduler-scheduler" containerID="cri-o://8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637" gracePeriod=30 Feb 16 02:41:29.452246 master-0 kubenswrapper[31559]: I0216 02:41:29.452159 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:29.452576 master-0 kubenswrapper[31559]: I0216 02:41:29.452495 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerName="nova-metadata-log" containerID="cri-o://d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2" gracePeriod=30 Feb 16 02:41:29.452662 master-0 kubenswrapper[31559]: I0216 02:41:29.452541 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerName="nova-metadata-metadata" containerID="cri-o://263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128" gracePeriod=30 Feb 16 02:41:29.670374 master-0 kubenswrapper[31559]: I0216 02:41:29.670265 31559 generic.go:334] "Generic (PLEG): container finished" podID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerID="c38e90199e886812cceda806368a1aabe6aca65ca4119f7342e3db1a895467a7" exitCode=143 Feb 16 02:41:29.670374 master-0 kubenswrapper[31559]: I0216 02:41:29.670368 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a","Type":"ContainerDied","Data":"c38e90199e886812cceda806368a1aabe6aca65ca4119f7342e3db1a895467a7"} Feb 16 02:41:29.675912 master-0 kubenswrapper[31559]: I0216 02:41:29.675888 31559 generic.go:334] "Generic 
(PLEG): container finished" podID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerID="d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2" exitCode=143 Feb 16 02:41:29.676075 master-0 kubenswrapper[31559]: I0216 02:41:29.676021 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb5e5602-f025-4f1e-9c92-e90de0c8cddf","Type":"ContainerDied","Data":"d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2"} Feb 16 02:41:29.940236 master-0 kubenswrapper[31559]: I0216 02:41:29.940098 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416c4107-c9b1-4c74-8084-5e5b3ea18696" path="/var/lib/kubelet/pods/416c4107-c9b1-4c74-8084-5e5b3ea18696/volumes" Feb 16 02:41:30.166933 master-0 kubenswrapper[31559]: I0216 02:41:30.166886 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:41:30.244385 master-0 kubenswrapper[31559]: I0216 02:41:30.244206 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-logs\") pod \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " Feb 16 02:41:30.244616 master-0 kubenswrapper[31559]: I0216 02:41:30.244395 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvpcz\" (UniqueName: \"kubernetes.io/projected/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-kube-api-access-vvpcz\") pod \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " Feb 16 02:41:30.244616 master-0 kubenswrapper[31559]: I0216 02:41:30.244557 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-config-data\") pod \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\" (UID: 
\"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " Feb 16 02:41:30.244720 master-0 kubenswrapper[31559]: I0216 02:41:30.244640 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-combined-ca-bundle\") pod \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " Feb 16 02:41:30.244780 master-0 kubenswrapper[31559]: I0216 02:41:30.244698 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-logs" (OuterVolumeSpecName: "logs") pod "bb5e5602-f025-4f1e-9c92-e90de0c8cddf" (UID: "bb5e5602-f025-4f1e-9c92-e90de0c8cddf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:41:30.244882 master-0 kubenswrapper[31559]: I0216 02:41:30.244856 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-nova-metadata-tls-certs\") pod \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\" (UID: \"bb5e5602-f025-4f1e-9c92-e90de0c8cddf\") " Feb 16 02:41:30.245765 master-0 kubenswrapper[31559]: I0216 02:41:30.245735 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:30.253709 master-0 kubenswrapper[31559]: I0216 02:41:30.253632 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-kube-api-access-vvpcz" (OuterVolumeSpecName: "kube-api-access-vvpcz") pod "bb5e5602-f025-4f1e-9c92-e90de0c8cddf" (UID: "bb5e5602-f025-4f1e-9c92-e90de0c8cddf"). InnerVolumeSpecName "kube-api-access-vvpcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:41:30.291461 master-0 kubenswrapper[31559]: I0216 02:41:30.286068 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-config-data" (OuterVolumeSpecName: "config-data") pod "bb5e5602-f025-4f1e-9c92-e90de0c8cddf" (UID: "bb5e5602-f025-4f1e-9c92-e90de0c8cddf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:30.291461 master-0 kubenswrapper[31559]: I0216 02:41:30.286533 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb5e5602-f025-4f1e-9c92-e90de0c8cddf" (UID: "bb5e5602-f025-4f1e-9c92-e90de0c8cddf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:30.312011 master-0 kubenswrapper[31559]: E0216 02:41:30.311937 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637 is running failed: container process not found" containerID="8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 02:41:30.312513 master-0 kubenswrapper[31559]: E0216 02:41:30.312429 31559 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637 is running failed: container process not found" containerID="8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 02:41:30.312927 master-0 kubenswrapper[31559]: E0216 02:41:30.312890 31559 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637 is running failed: container process not found" containerID="8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 02:41:30.313000 master-0 kubenswrapper[31559]: E0216 02:41:30.312935 31559 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e7be751e-f768-4ff3-aad5-985f26d3b399" containerName="nova-scheduler-scheduler" Feb 16 02:41:30.321690 master-0 kubenswrapper[31559]: I0216 02:41:30.321544 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bb5e5602-f025-4f1e-9c92-e90de0c8cddf" (UID: "bb5e5602-f025-4f1e-9c92-e90de0c8cddf"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:30.349619 master-0 kubenswrapper[31559]: I0216 02:41:30.349568 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvpcz\" (UniqueName: \"kubernetes.io/projected/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-kube-api-access-vvpcz\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:30.349716 master-0 kubenswrapper[31559]: I0216 02:41:30.349620 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:30.349716 master-0 kubenswrapper[31559]: I0216 02:41:30.349636 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:30.349716 master-0 kubenswrapper[31559]: I0216 02:41:30.349648 31559 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb5e5602-f025-4f1e-9c92-e90de0c8cddf-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:30.689316 master-0 kubenswrapper[31559]: I0216 02:41:30.689274 31559 generic.go:334] "Generic (PLEG): container finished" podID="e7be751e-f768-4ff3-aad5-985f26d3b399" containerID="8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637" exitCode=0 Feb 16 02:41:30.689948 master-0 kubenswrapper[31559]: I0216 02:41:30.689366 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7be751e-f768-4ff3-aad5-985f26d3b399","Type":"ContainerDied","Data":"8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637"} Feb 16 02:41:30.692825 master-0 kubenswrapper[31559]: I0216 02:41:30.692781 31559 generic.go:334] "Generic (PLEG): container finished" podID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" 
containerID="263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128" exitCode=0 Feb 16 02:41:30.692917 master-0 kubenswrapper[31559]: I0216 02:41:30.692856 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:41:30.693560 master-0 kubenswrapper[31559]: I0216 02:41:30.693317 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb5e5602-f025-4f1e-9c92-e90de0c8cddf","Type":"ContainerDied","Data":"263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128"} Feb 16 02:41:30.693560 master-0 kubenswrapper[31559]: I0216 02:41:30.693374 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb5e5602-f025-4f1e-9c92-e90de0c8cddf","Type":"ContainerDied","Data":"1551fd684dd950540dbbd1e2fe72d8a5c1b5c57c42ed0117dd226c6f8d53b0a5"} Feb 16 02:41:30.693560 master-0 kubenswrapper[31559]: I0216 02:41:30.693397 31559 scope.go:117] "RemoveContainer" containerID="263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128" Feb 16 02:41:30.698345 master-0 kubenswrapper[31559]: I0216 02:41:30.698281 31559 generic.go:334] "Generic (PLEG): container finished" podID="d980fc2e-586c-4a8d-ad5f-d6385e22c959" containerID="7f676bcfacc06c6183bcab91167d1bac7b31aad9f2d96d4cf19dc517f26bfb3a" exitCode=0 Feb 16 02:41:30.698714 master-0 kubenswrapper[31559]: I0216 02:41:30.698316 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-86vgj" event={"ID":"d980fc2e-586c-4a8d-ad5f-d6385e22c959","Type":"ContainerDied","Data":"7f676bcfacc06c6183bcab91167d1bac7b31aad9f2d96d4cf19dc517f26bfb3a"} Feb 16 02:41:30.752930 master-0 kubenswrapper[31559]: I0216 02:41:30.752882 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 02:41:30.780921 master-0 kubenswrapper[31559]: I0216 02:41:30.780863 31559 scope.go:117] "RemoveContainer" containerID="d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2" Feb 16 02:41:30.786792 master-0 kubenswrapper[31559]: I0216 02:41:30.786407 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:30.815725 master-0 kubenswrapper[31559]: I0216 02:41:30.808722 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:30.836415 master-0 kubenswrapper[31559]: I0216 02:41:30.836368 31559 scope.go:117] "RemoveContainer" containerID="263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128" Feb 16 02:41:30.839412 master-0 kubenswrapper[31559]: E0216 02:41:30.839365 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128\": container with ID starting with 263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128 not found: ID does not exist" containerID="263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128" Feb 16 02:41:30.839492 master-0 kubenswrapper[31559]: I0216 02:41:30.839415 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128"} err="failed to get container status \"263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128\": rpc error: code = NotFound desc = could not find container \"263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128\": container with ID starting with 263608eee0555fa3cc37134605c97adc6e568dcc5c668e451347eae4fa58b128 not found: ID does not exist" Feb 16 02:41:30.839492 master-0 kubenswrapper[31559]: I0216 02:41:30.839471 31559 scope.go:117] "RemoveContainer" 
containerID="d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2" Feb 16 02:41:30.840810 master-0 kubenswrapper[31559]: E0216 02:41:30.840675 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2\": container with ID starting with d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2 not found: ID does not exist" containerID="d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2" Feb 16 02:41:30.840810 master-0 kubenswrapper[31559]: I0216 02:41:30.840737 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2"} err="failed to get container status \"d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2\": rpc error: code = NotFound desc = could not find container \"d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2\": container with ID starting with d6e145573b611cc513983237ef03796a67e6b82bde69dea177a5623b562b08f2 not found: ID does not exist" Feb 16 02:41:30.861463 master-0 kubenswrapper[31559]: I0216 02:41:30.860771 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:30.861701 master-0 kubenswrapper[31559]: E0216 02:41:30.861639 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c4107-c9b1-4c74-8084-5e5b3ea18696" containerName="init" Feb 16 02:41:30.861701 master-0 kubenswrapper[31559]: I0216 02:41:30.861661 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c4107-c9b1-4c74-8084-5e5b3ea18696" containerName="init" Feb 16 02:41:30.861701 master-0 kubenswrapper[31559]: E0216 02:41:30.861694 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerName="nova-metadata-log" Feb 16 02:41:30.861788 master-0 
kubenswrapper[31559]: I0216 02:41:30.861704 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerName="nova-metadata-log" Feb 16 02:41:30.861788 master-0 kubenswrapper[31559]: E0216 02:41:30.861722 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c4107-c9b1-4c74-8084-5e5b3ea18696" containerName="dnsmasq-dns" Feb 16 02:41:30.861788 master-0 kubenswrapper[31559]: I0216 02:41:30.861732 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c4107-c9b1-4c74-8084-5e5b3ea18696" containerName="dnsmasq-dns" Feb 16 02:41:30.861788 master-0 kubenswrapper[31559]: E0216 02:41:30.861759 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerName="nova-metadata-metadata" Feb 16 02:41:30.861788 master-0 kubenswrapper[31559]: I0216 02:41:30.861766 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerName="nova-metadata-metadata" Feb 16 02:41:30.861788 master-0 kubenswrapper[31559]: E0216 02:41:30.861782 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12aa2463-4471-4146-9b3e-e5532987d769" containerName="nova-manage" Feb 16 02:41:30.861788 master-0 kubenswrapper[31559]: I0216 02:41:30.861790 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="12aa2463-4471-4146-9b3e-e5532987d769" containerName="nova-manage" Feb 16 02:41:30.861991 master-0 kubenswrapper[31559]: E0216 02:41:30.861806 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7be751e-f768-4ff3-aad5-985f26d3b399" containerName="nova-scheduler-scheduler" Feb 16 02:41:30.861991 master-0 kubenswrapper[31559]: I0216 02:41:30.861814 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7be751e-f768-4ff3-aad5-985f26d3b399" containerName="nova-scheduler-scheduler" Feb 16 02:41:30.863715 master-0 kubenswrapper[31559]: I0216 02:41:30.862075 31559 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="416c4107-c9b1-4c74-8084-5e5b3ea18696" containerName="dnsmasq-dns" Feb 16 02:41:30.863715 master-0 kubenswrapper[31559]: I0216 02:41:30.862105 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="12aa2463-4471-4146-9b3e-e5532987d769" containerName="nova-manage" Feb 16 02:41:30.863715 master-0 kubenswrapper[31559]: I0216 02:41:30.862132 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerName="nova-metadata-log" Feb 16 02:41:30.863715 master-0 kubenswrapper[31559]: I0216 02:41:30.862148 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7be751e-f768-4ff3-aad5-985f26d3b399" containerName="nova-scheduler-scheduler" Feb 16 02:41:30.863715 master-0 kubenswrapper[31559]: I0216 02:41:30.862169 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" containerName="nova-metadata-metadata" Feb 16 02:41:30.865898 master-0 kubenswrapper[31559]: I0216 02:41:30.865706 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:41:30.868045 master-0 kubenswrapper[31559]: I0216 02:41:30.868005 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 02:41:30.874452 master-0 kubenswrapper[31559]: I0216 02:41:30.870814 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 02:41:30.879466 master-0 kubenswrapper[31559]: I0216 02:41:30.875925 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-combined-ca-bundle\") pod \"e7be751e-f768-4ff3-aad5-985f26d3b399\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " Feb 16 02:41:30.879466 master-0 kubenswrapper[31559]: I0216 02:41:30.875974 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-config-data\") pod \"e7be751e-f768-4ff3-aad5-985f26d3b399\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " Feb 16 02:41:30.879466 master-0 kubenswrapper[31559]: I0216 02:41:30.876110 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgnrv\" (UniqueName: \"kubernetes.io/projected/e7be751e-f768-4ff3-aad5-985f26d3b399-kube-api-access-xgnrv\") pod \"e7be751e-f768-4ff3-aad5-985f26d3b399\" (UID: \"e7be751e-f768-4ff3-aad5-985f26d3b399\") " Feb 16 02:41:30.879466 master-0 kubenswrapper[31559]: I0216 02:41:30.877833 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:41:30.881317 master-0 kubenswrapper[31559]: I0216 02:41:30.880290 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7be751e-f768-4ff3-aad5-985f26d3b399-kube-api-access-xgnrv" (OuterVolumeSpecName: "kube-api-access-xgnrv") pod 
"e7be751e-f768-4ff3-aad5-985f26d3b399" (UID: "e7be751e-f768-4ff3-aad5-985f26d3b399"). InnerVolumeSpecName "kube-api-access-xgnrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:41:30.908933 master-0 kubenswrapper[31559]: I0216 02:41:30.908880 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-config-data" (OuterVolumeSpecName: "config-data") pod "e7be751e-f768-4ff3-aad5-985f26d3b399" (UID: "e7be751e-f768-4ff3-aad5-985f26d3b399"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:30.919063 master-0 kubenswrapper[31559]: I0216 02:41:30.919001 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7be751e-f768-4ff3-aad5-985f26d3b399" (UID: "e7be751e-f768-4ff3-aad5-985f26d3b399"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:30.989233 master-0 kubenswrapper[31559]: I0216 02:41:30.989094 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-logs\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:30.989233 master-0 kubenswrapper[31559]: I0216 02:41:30.989147 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:30.989233 master-0 kubenswrapper[31559]: I0216 02:41:30.989187 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltkkv\" (UniqueName: \"kubernetes.io/projected/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-kube-api-access-ltkkv\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:30.989601 master-0 kubenswrapper[31559]: I0216 02:41:30.989240 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-config-data\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:30.989601 master-0 kubenswrapper[31559]: I0216 02:41:30.989330 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:30.989601 master-0 kubenswrapper[31559]: I0216 02:41:30.989461 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:30.989601 master-0 kubenswrapper[31559]: I0216 02:41:30.989516 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7be751e-f768-4ff3-aad5-985f26d3b399-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:30.989601 master-0 kubenswrapper[31559]: I0216 02:41:30.989528 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgnrv\" (UniqueName: \"kubernetes.io/projected/e7be751e-f768-4ff3-aad5-985f26d3b399-kube-api-access-xgnrv\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:31.091239 master-0 kubenswrapper[31559]: I0216 02:41:31.091158 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.091642 master-0 kubenswrapper[31559]: I0216 02:41:31.091295 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-logs\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.091642 master-0 kubenswrapper[31559]: I0216 02:41:31.091315 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.091642 master-0 kubenswrapper[31559]: I0216 02:41:31.091349 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltkkv\" (UniqueName: \"kubernetes.io/projected/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-kube-api-access-ltkkv\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.091642 master-0 kubenswrapper[31559]: I0216 02:41:31.091398 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-config-data\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.092397 master-0 kubenswrapper[31559]: I0216 02:41:31.092354 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-logs\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.095536 master-0 kubenswrapper[31559]: I0216 02:41:31.094586 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.095536 master-0 kubenswrapper[31559]: I0216 02:41:31.095049 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.095536 master-0 kubenswrapper[31559]: I0216 02:41:31.095155 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-config-data\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.122309 master-0 kubenswrapper[31559]: I0216 02:41:31.122178 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltkkv\" (UniqueName: \"kubernetes.io/projected/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-kube-api-access-ltkkv\") pod \"nova-metadata-0\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " pod="openstack/nova-metadata-0"
Feb 16 02:41:31.203524 master-0 kubenswrapper[31559]: I0216 02:41:31.203421 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 02:41:31.717864 master-0 kubenswrapper[31559]: I0216 02:41:31.717770 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7be751e-f768-4ff3-aad5-985f26d3b399","Type":"ContainerDied","Data":"89f4185f583536053b801ddd8a6c3349142cb1a4620155e69522f3fa442947f4"}
Feb 16 02:41:31.718749 master-0 kubenswrapper[31559]: I0216 02:41:31.717880 31559 scope.go:117] "RemoveContainer" containerID="8cb65666c4f9e4900d4f0012c082473fb24df4169f2d765b291a17bd721d4637"
Feb 16 02:41:31.718749 master-0 kubenswrapper[31559]: I0216 02:41:31.718025 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 02:41:31.883916 master-0 kubenswrapper[31559]: I0216 02:41:31.883858 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 02:41:31.905543 master-0 kubenswrapper[31559]: I0216 02:41:31.905354 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 02:41:31.916868 master-0 kubenswrapper[31559]: I0216 02:41:31.916807 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 02:41:31.942948 master-0 kubenswrapper[31559]: I0216 02:41:31.939869 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5e5602-f025-4f1e-9c92-e90de0c8cddf" path="/var/lib/kubelet/pods/bb5e5602-f025-4f1e-9c92-e90de0c8cddf/volumes"
Feb 16 02:41:31.942948 master-0 kubenswrapper[31559]: I0216 02:41:31.940757 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7be751e-f768-4ff3-aad5-985f26d3b399" path="/var/lib/kubelet/pods/e7be751e-f768-4ff3-aad5-985f26d3b399/volumes"
Feb 16 02:41:31.942948 master-0 kubenswrapper[31559]: I0216 02:41:31.941493 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 02:41:31.944131 master-0 kubenswrapper[31559]: I0216 02:41:31.944107 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 02:41:31.946356 master-0 kubenswrapper[31559]: I0216 02:41:31.946333 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 02:41:31.946868 master-0 kubenswrapper[31559]: I0216 02:41:31.946824 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 02:41:32.050614 master-0 kubenswrapper[31559]: I0216 02:41:32.050566 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.050755 master-0 kubenswrapper[31559]: I0216 02:41:32.050684 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-config-data\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.050804 master-0 kubenswrapper[31559]: I0216 02:41:32.050756 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjkks\" (UniqueName: \"kubernetes.io/projected/fdf75778-245b-4cac-9568-71c9dfbc0a93-kube-api-access-zjkks\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.152669 master-0 kubenswrapper[31559]: I0216 02:41:32.152594 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-config-data\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.152843 master-0 kubenswrapper[31559]: I0216 02:41:32.152733 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjkks\" (UniqueName: \"kubernetes.io/projected/fdf75778-245b-4cac-9568-71c9dfbc0a93-kube-api-access-zjkks\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.152903 master-0 kubenswrapper[31559]: I0216 02:41:32.152889 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.156951 master-0 kubenswrapper[31559]: I0216 02:41:32.156885 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-config-data\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.158215 master-0 kubenswrapper[31559]: I0216 02:41:32.158178 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.180639 master-0 kubenswrapper[31559]: I0216 02:41:32.180609 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjkks\" (UniqueName: \"kubernetes.io/projected/fdf75778-245b-4cac-9568-71c9dfbc0a93-kube-api-access-zjkks\") pod \"nova-scheduler-0\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.263142 master-0 kubenswrapper[31559]: I0216 02:41:32.263096 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 02:41:32.271754 master-0 kubenswrapper[31559]: I0216 02:41:32.271706 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:32.356557 master-0 kubenswrapper[31559]: I0216 02:41:32.356499 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jmcf\" (UniqueName: \"kubernetes.io/projected/d980fc2e-586c-4a8d-ad5f-d6385e22c959-kube-api-access-5jmcf\") pod \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") "
Feb 16 02:41:32.356780 master-0 kubenswrapper[31559]: I0216 02:41:32.356740 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-combined-ca-bundle\") pod \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") "
Feb 16 02:41:32.356853 master-0 kubenswrapper[31559]: I0216 02:41:32.356821 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-scripts\") pod \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") "
Feb 16 02:41:32.356956 master-0 kubenswrapper[31559]: I0216 02:41:32.356930 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-config-data\") pod \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\" (UID: \"d980fc2e-586c-4a8d-ad5f-d6385e22c959\") "
Feb 16 02:41:32.360309 master-0 kubenswrapper[31559]: I0216 02:41:32.360239 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d980fc2e-586c-4a8d-ad5f-d6385e22c959-kube-api-access-5jmcf" (OuterVolumeSpecName: "kube-api-access-5jmcf") pod "d980fc2e-586c-4a8d-ad5f-d6385e22c959" (UID: "d980fc2e-586c-4a8d-ad5f-d6385e22c959"). InnerVolumeSpecName "kube-api-access-5jmcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:41:32.365050 master-0 kubenswrapper[31559]: I0216 02:41:32.363899 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-scripts" (OuterVolumeSpecName: "scripts") pod "d980fc2e-586c-4a8d-ad5f-d6385e22c959" (UID: "d980fc2e-586c-4a8d-ad5f-d6385e22c959"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:32.428062 master-0 kubenswrapper[31559]: I0216 02:41:32.427992 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d980fc2e-586c-4a8d-ad5f-d6385e22c959" (UID: "d980fc2e-586c-4a8d-ad5f-d6385e22c959"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:32.457046 master-0 kubenswrapper[31559]: I0216 02:41:32.448755 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-config-data" (OuterVolumeSpecName: "config-data") pod "d980fc2e-586c-4a8d-ad5f-d6385e22c959" (UID: "d980fc2e-586c-4a8d-ad5f-d6385e22c959"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:32.462267 master-0 kubenswrapper[31559]: I0216 02:41:32.461047 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jmcf\" (UniqueName: \"kubernetes.io/projected/d980fc2e-586c-4a8d-ad5f-d6385e22c959-kube-api-access-5jmcf\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:32.462267 master-0 kubenswrapper[31559]: I0216 02:41:32.461093 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:32.462267 master-0 kubenswrapper[31559]: I0216 02:41:32.461107 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:32.462267 master-0 kubenswrapper[31559]: I0216 02:41:32.461120 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d980fc2e-586c-4a8d-ad5f-d6385e22c959-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:32.749989 master-0 kubenswrapper[31559]: I0216 02:41:32.749901 31559 generic.go:334] "Generic (PLEG): container finished" podID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerID="4f4a9838b3ffe7d069796932cbf83cd3b8c64a9cdc99ccff54bd09bb2a17adfc" exitCode=0
Feb 16 02:41:32.750653 master-0 kubenswrapper[31559]: I0216 02:41:32.750045 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a","Type":"ContainerDied","Data":"4f4a9838b3ffe7d069796932cbf83cd3b8c64a9cdc99ccff54bd09bb2a17adfc"}
Feb 16 02:41:32.752277 master-0 kubenswrapper[31559]: I0216 02:41:32.752223 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-86vgj" event={"ID":"d980fc2e-586c-4a8d-ad5f-d6385e22c959","Type":"ContainerDied","Data":"5cd96e1fed1955707437875738994ccd1c511fcc24215e5f84c0f8efbc32defc"}
Feb 16 02:41:32.752277 master-0 kubenswrapper[31559]: I0216 02:41:32.752253 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-86vgj"
Feb 16 02:41:32.752398 master-0 kubenswrapper[31559]: I0216 02:41:32.752278 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cd96e1fed1955707437875738994ccd1c511fcc24215e5f84c0f8efbc32defc"
Feb 16 02:41:32.754779 master-0 kubenswrapper[31559]: I0216 02:41:32.754705 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727","Type":"ContainerStarted","Data":"82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5"}
Feb 16 02:41:32.754870 master-0 kubenswrapper[31559]: I0216 02:41:32.754802 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727","Type":"ContainerStarted","Data":"a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0"}
Feb 16 02:41:32.754870 master-0 kubenswrapper[31559]: I0216 02:41:32.754842 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727","Type":"ContainerStarted","Data":"9c836a3d768be8ea42b15245ebf7088be9117c08d227ea97d47ec08ea20ec538"}
Feb 16 02:41:32.799345 master-0 kubenswrapper[31559]: I0216 02:41:32.799257 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 02:41:32.868669 master-0 kubenswrapper[31559]: I0216 02:41:32.856533 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.856504071 podStartE2EDuration="2.856504071s" podCreationTimestamp="2026-02-16 02:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:32.832776825 +0000 UTC m=+1145.177382860" watchObservedRunningTime="2026-02-16 02:41:32.856504071 +0000 UTC m=+1145.201110106"
Feb 16 02:41:32.974467 master-0 kubenswrapper[31559]: I0216 02:41:32.973799 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 02:41:32.977165 master-0 kubenswrapper[31559]: E0216 02:41:32.974685 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d980fc2e-586c-4a8d-ad5f-d6385e22c959" containerName="nova-cell1-conductor-db-sync"
Feb 16 02:41:32.977165 master-0 kubenswrapper[31559]: I0216 02:41:32.974707 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="d980fc2e-586c-4a8d-ad5f-d6385e22c959" containerName="nova-cell1-conductor-db-sync"
Feb 16 02:41:32.977165 master-0 kubenswrapper[31559]: I0216 02:41:32.975186 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="d980fc2e-586c-4a8d-ad5f-d6385e22c959" containerName="nova-cell1-conductor-db-sync"
Feb 16 02:41:32.977165 master-0 kubenswrapper[31559]: I0216 02:41:32.976280 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:32.985462 master-0 kubenswrapper[31559]: I0216 02:41:32.981749 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 16 02:41:32.993332 master-0 kubenswrapper[31559]: I0216 02:41:32.993259 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 02:41:33.074414 master-0 kubenswrapper[31559]: I0216 02:41:33.074231 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8297f7b3-7be4-4502-8632-6c723f6dfc1d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.074796 master-0 kubenswrapper[31559]: I0216 02:41:33.074491 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjcm\" (UniqueName: \"kubernetes.io/projected/8297f7b3-7be4-4502-8632-6c723f6dfc1d-kube-api-access-5kjcm\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.074796 master-0 kubenswrapper[31559]: I0216 02:41:33.074521 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8297f7b3-7be4-4502-8632-6c723f6dfc1d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.176902 master-0 kubenswrapper[31559]: I0216 02:41:33.176816 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8297f7b3-7be4-4502-8632-6c723f6dfc1d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.177236 master-0 kubenswrapper[31559]: I0216 02:41:33.177157 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjcm\" (UniqueName: \"kubernetes.io/projected/8297f7b3-7be4-4502-8632-6c723f6dfc1d-kube-api-access-5kjcm\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.177236 master-0 kubenswrapper[31559]: I0216 02:41:33.177217 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8297f7b3-7be4-4502-8632-6c723f6dfc1d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.182040 master-0 kubenswrapper[31559]: I0216 02:41:33.181980 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8297f7b3-7be4-4502-8632-6c723f6dfc1d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.187142 master-0 kubenswrapper[31559]: I0216 02:41:33.187091 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8297f7b3-7be4-4502-8632-6c723f6dfc1d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.243260 master-0 kubenswrapper[31559]: I0216 02:41:33.243200 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjcm\" (UniqueName: \"kubernetes.io/projected/8297f7b3-7be4-4502-8632-6c723f6dfc1d-kube-api-access-5kjcm\") pod \"nova-cell1-conductor-0\" (UID: \"8297f7b3-7be4-4502-8632-6c723f6dfc1d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.315962 master-0 kubenswrapper[31559]: I0216 02:41:33.315919 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:41:33.328829 master-0 kubenswrapper[31559]: I0216 02:41:33.328648 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 16 02:41:33.382969 master-0 kubenswrapper[31559]: I0216 02:41:33.382887 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-config-data\") pod \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") "
Feb 16 02:41:33.405192 master-0 kubenswrapper[31559]: I0216 02:41:33.405119 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-logs\") pod \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") "
Feb 16 02:41:33.405539 master-0 kubenswrapper[31559]: I0216 02:41:33.405502 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s2cn\" (UniqueName: \"kubernetes.io/projected/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-kube-api-access-6s2cn\") pod \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") "
Feb 16 02:41:33.405750 master-0 kubenswrapper[31559]: I0216 02:41:33.405682 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-logs" (OuterVolumeSpecName: "logs") pod "75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" (UID: "75a69fc6-fb23-44e1-8cc9-b5da2d90de3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:41:33.405822 master-0 kubenswrapper[31559]: I0216 02:41:33.405678 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-combined-ca-bundle\") pod \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\" (UID: \"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a\") "
Feb 16 02:41:33.407208 master-0 kubenswrapper[31559]: I0216 02:41:33.407180 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:33.411700 master-0 kubenswrapper[31559]: I0216 02:41:33.409562 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-kube-api-access-6s2cn" (OuterVolumeSpecName: "kube-api-access-6s2cn") pod "75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" (UID: "75a69fc6-fb23-44e1-8cc9-b5da2d90de3a"). InnerVolumeSpecName "kube-api-access-6s2cn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:41:33.436876 master-0 kubenswrapper[31559]: I0216 02:41:33.433071 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-config-data" (OuterVolumeSpecName: "config-data") pod "75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" (UID: "75a69fc6-fb23-44e1-8cc9-b5da2d90de3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:33.474917 master-0 kubenswrapper[31559]: I0216 02:41:33.474844 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" (UID: "75a69fc6-fb23-44e1-8cc9-b5da2d90de3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:41:33.512164 master-0 kubenswrapper[31559]: I0216 02:41:33.512106 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:33.512164 master-0 kubenswrapper[31559]: I0216 02:41:33.512155 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s2cn\" (UniqueName: \"kubernetes.io/projected/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-kube-api-access-6s2cn\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:33.512164 master-0 kubenswrapper[31559]: I0216 02:41:33.512165 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:41:33.777647 master-0 kubenswrapper[31559]: I0216 02:41:33.777586 31559 generic.go:334] "Generic (PLEG): container finished" podID="f71d864b-c882-43b8-a7a7-b1e163d38aa4" containerID="8091045113d7c4d70e9e3e1588a057088d5e4a4b106abd7fff2b97bfd191a69f" exitCode=0
Feb 16 02:41:33.778196 master-0 kubenswrapper[31559]: I0216 02:41:33.777655 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerDied","Data":"8091045113d7c4d70e9e3e1588a057088d5e4a4b106abd7fff2b97bfd191a69f"}
Feb 16 02:41:33.797860 master-0 kubenswrapper[31559]: I0216 02:41:33.791124 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdf75778-245b-4cac-9568-71c9dfbc0a93","Type":"ContainerStarted","Data":"ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2"}
Feb 16 02:41:33.797860 master-0 kubenswrapper[31559]: I0216 02:41:33.791166 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdf75778-245b-4cac-9568-71c9dfbc0a93","Type":"ContainerStarted","Data":"3e98170f80630bf45f03ebe251af59b4253668bfb4333892396ab44956710c3a"}
Feb 16 02:41:33.797860 master-0 kubenswrapper[31559]: I0216 02:41:33.794075 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a69fc6-fb23-44e1-8cc9-b5da2d90de3a","Type":"ContainerDied","Data":"be1944ff94cecc22ed765cff26152fb1077f343242524b38370b2f84d10f3e27"}
Feb 16 02:41:33.797860 master-0 kubenswrapper[31559]: I0216 02:41:33.794132 31559 scope.go:117] "RemoveContainer" containerID="4f4a9838b3ffe7d069796932cbf83cd3b8c64a9cdc99ccff54bd09bb2a17adfc"
Feb 16 02:41:33.797860 master-0 kubenswrapper[31559]: I0216 02:41:33.794102 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:41:33.797860 master-0 kubenswrapper[31559]: W0216 02:41:33.795415 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8297f7b3_7be4_4502_8632_6c723f6dfc1d.slice/crio-69b176d5c9a98f618d77d5d7f5445ccaa1be7388c182b47d6e072e706025e78d WatchSource:0}: Error finding container 69b176d5c9a98f618d77d5d7f5445ccaa1be7388c182b47d6e072e706025e78d: Status 404 returned error can't find the container with id 69b176d5c9a98f618d77d5d7f5445ccaa1be7388c182b47d6e072e706025e78d
Feb 16 02:41:33.804551 master-0 kubenswrapper[31559]: I0216 02:41:33.798723 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 02:41:33.842815 master-0 kubenswrapper[31559]: I0216 02:41:33.842723 31559 scope.go:117] "RemoveContainer" containerID="c38e90199e886812cceda806368a1aabe6aca65ca4119f7342e3db1a895467a7"
Feb 16 02:41:33.869910 master-0 kubenswrapper[31559]: I0216 02:41:33.867104 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.867074247 podStartE2EDuration="2.867074247s" podCreationTimestamp="2026-02-16 02:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:33.856044836 +0000 UTC m=+1146.200650891" watchObservedRunningTime="2026-02-16 02:41:33.867074247 +0000 UTC m=+1146.211680302"
Feb 16 02:41:33.893742 master-0 kubenswrapper[31559]: I0216 02:41:33.893638 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:41:33.920601 master-0 kubenswrapper[31559]: I0216 02:41:33.920508 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:41:34.014850 master-0 kubenswrapper[31559]: I0216 02:41:34.014790 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" path="/var/lib/kubelet/pods/75a69fc6-fb23-44e1-8cc9-b5da2d90de3a/volumes"
Feb 16 02:41:34.015553 master-0 kubenswrapper[31559]: I0216 02:41:34.015523 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:41:34.015988 master-0 kubenswrapper[31559]: E0216 02:41:34.015956 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-api"
Feb 16 02:41:34.015988 master-0 kubenswrapper[31559]: I0216 02:41:34.015981 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-api"
Feb 16 02:41:34.016088 master-0 kubenswrapper[31559]: E0216 02:41:34.016016 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-log"
Feb 16 02:41:34.016088 master-0 kubenswrapper[31559]: I0216 02:41:34.016024 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-log"
Feb 16 02:41:34.016362 master-0 kubenswrapper[31559]: I0216 02:41:34.016333 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-api"
Feb 16 02:41:34.016418 master-0 kubenswrapper[31559]: I0216 02:41:34.016383 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a69fc6-fb23-44e1-8cc9-b5da2d90de3a" containerName="nova-api-log"
Feb 16 02:41:34.017809 master-0 kubenswrapper[31559]: I0216 02:41:34.017774 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:41:34.017903 master-0 kubenswrapper[31559]: I0216 02:41:34.017875 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:41:34.022017 master-0 kubenswrapper[31559]: I0216 02:41:34.021735 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 02:41:34.100952 master-0 kubenswrapper[31559]: I0216 02:41:34.100849 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-config-data\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0"
Feb 16 02:41:34.102055 master-0 kubenswrapper[31559]: I0216 02:41:34.102005 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtzzh\" (UniqueName: \"kubernetes.io/projected/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-kube-api-access-wtzzh\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0"
Feb 16 02:41:34.102555 master-0 kubenswrapper[31559]: I0216 02:41:34.102207 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0"
Feb 16 02:41:34.102555 master-0 kubenswrapper[31559]: I0216 02:41:34.102481 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-logs\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0"
Feb 16 02:41:34.204843 master-0 kubenswrapper[31559]: I0216 02:41:34.204009 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtzzh\" (UniqueName: \"kubernetes.io/projected/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-kube-api-access-wtzzh\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0"
Feb 16 02:41:34.204843 master-0 kubenswrapper[31559]: I0216 02:41:34.204334 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0"
Feb 16 02:41:34.204843 master-0 kubenswrapper[31559]: I0216 02:41:34.204576 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-logs\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0"
Feb 16 02:41:34.204843 master-0 kubenswrapper[31559]: I0216 02:41:34.204765 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-config-data\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0"
Feb 16 02:41:34.206511 master-0 kubenswrapper[31559]: I0216 02:41:34.206449 31559 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-logs\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0" Feb 16 02:41:34.208315 master-0 kubenswrapper[31559]: I0216 02:41:34.208276 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-config-data\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0" Feb 16 02:41:34.211418 master-0 kubenswrapper[31559]: I0216 02:41:34.211374 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0" Feb 16 02:41:34.219115 master-0 kubenswrapper[31559]: I0216 02:41:34.219071 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtzzh\" (UniqueName: \"kubernetes.io/projected/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-kube-api-access-wtzzh\") pod \"nova-api-0\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") " pod="openstack/nova-api-0" Feb 16 02:41:34.344104 master-0 kubenswrapper[31559]: I0216 02:41:34.344033 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 02:41:34.812593 master-0 kubenswrapper[31559]: I0216 02:41:34.812500 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8297f7b3-7be4-4502-8632-6c723f6dfc1d","Type":"ContainerStarted","Data":"5a1f90ba464619a3be7c64dc53470e6b074b15bc1fe4df0d7eb9c596b79be71c"} Feb 16 02:41:34.812593 master-0 kubenswrapper[31559]: I0216 02:41:34.812565 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8297f7b3-7be4-4502-8632-6c723f6dfc1d","Type":"ContainerStarted","Data":"69b176d5c9a98f618d77d5d7f5445ccaa1be7388c182b47d6e072e706025e78d"} Feb 16 02:41:34.813514 master-0 kubenswrapper[31559]: I0216 02:41:34.812633 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 02:41:34.816391 master-0 kubenswrapper[31559]: I0216 02:41:34.816342 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerStarted","Data":"bf57be97691f7645183d71ae8fd78aa3d8ce70d25613691b679d7b679c02165e"} Feb 16 02:41:34.841489 master-0 kubenswrapper[31559]: I0216 02:41:34.839780 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.839760987 podStartE2EDuration="2.839760987s" podCreationTimestamp="2026-02-16 02:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:34.834280487 +0000 UTC m=+1147.178886502" watchObservedRunningTime="2026-02-16 02:41:34.839760987 +0000 UTC m=+1147.184367002" Feb 16 02:41:34.858428 master-0 kubenswrapper[31559]: I0216 02:41:34.858374 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 02:41:34.861828 master-0 kubenswrapper[31559]: W0216 
02:41:34.861728 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce0c6b95_bf6a_47dc_aa0d_c448d9c5f2eb.slice/crio-db18fc3fc0b32f17349c3dda2c1ce4996aef689afd14234196b03cf1d6e47fdd WatchSource:0}: Error finding container db18fc3fc0b32f17349c3dda2c1ce4996aef689afd14234196b03cf1d6e47fdd: Status 404 returned error can't find the container with id db18fc3fc0b32f17349c3dda2c1ce4996aef689afd14234196b03cf1d6e47fdd Feb 16 02:41:35.835544 master-0 kubenswrapper[31559]: I0216 02:41:35.835286 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerStarted","Data":"f5e1d4e6eebfaf346edd0f5b1266008091b280074284dd80e63aba93069d2785"} Feb 16 02:41:35.835544 master-0 kubenswrapper[31559]: I0216 02:41:35.835365 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"f71d864b-c882-43b8-a7a7-b1e163d38aa4","Type":"ContainerStarted","Data":"7a449071a2e9ecd79e92e2faf1319a54eb7ab82c150f5bc22ba9c8ebe29f7b56"} Feb 16 02:41:35.835544 master-0 kubenswrapper[31559]: I0216 02:41:35.835457 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 16 02:41:35.836785 master-0 kubenswrapper[31559]: I0216 02:41:35.836746 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 16 02:41:35.837488 master-0 kubenswrapper[31559]: I0216 02:41:35.837361 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb","Type":"ContainerStarted","Data":"3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f"} Feb 16 02:41:35.837488 master-0 kubenswrapper[31559]: I0216 02:41:35.837410 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb","Type":"ContainerStarted","Data":"987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796"} Feb 16 02:41:35.837488 master-0 kubenswrapper[31559]: I0216 02:41:35.837430 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb","Type":"ContainerStarted","Data":"db18fc3fc0b32f17349c3dda2c1ce4996aef689afd14234196b03cf1d6e47fdd"} Feb 16 02:41:35.910489 master-0 kubenswrapper[31559]: I0216 02:41:35.908874 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=57.94844014 podStartE2EDuration="1m44.908846649s" podCreationTimestamp="2026-02-16 02:39:51 +0000 UTC" firstStartedPulling="2026-02-16 02:40:01.441456714 +0000 UTC m=+1053.786062729" lastFinishedPulling="2026-02-16 02:40:48.401863223 +0000 UTC m=+1100.746469238" observedRunningTime="2026-02-16 02:41:35.895952549 +0000 UTC m=+1148.240558564" watchObservedRunningTime="2026-02-16 02:41:35.908846649 +0000 UTC m=+1148.253452674" Feb 16 02:41:35.934390 master-0 kubenswrapper[31559]: I0216 02:41:35.934296 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.934272588 podStartE2EDuration="2.934272588s" podCreationTimestamp="2026-02-16 02:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:35.923913523 +0000 UTC m=+1148.268519568" watchObservedRunningTime="2026-02-16 02:41:35.934272588 +0000 UTC m=+1148.278878603" Feb 16 02:41:36.204707 master-0 kubenswrapper[31559]: I0216 02:41:36.204457 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 02:41:36.205074 master-0 kubenswrapper[31559]: I0216 02:41:36.205036 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Feb 16 02:41:37.264321 master-0 kubenswrapper[31559]: I0216 02:41:37.264232 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 02:41:37.375321 master-0 kubenswrapper[31559]: I0216 02:41:37.375229 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Feb 16 02:41:38.726677 master-0 kubenswrapper[31559]: I0216 02:41:38.726579 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Feb 16 02:41:38.921770 master-0 kubenswrapper[31559]: I0216 02:41:38.921686 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 16 02:41:39.899891 master-0 kubenswrapper[31559]: I0216 02:41:39.899797 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 16 02:41:41.208688 master-0 kubenswrapper[31559]: I0216 02:41:41.208498 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 02:41:41.208688 master-0 kubenswrapper[31559]: I0216 02:41:41.208588 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 02:41:42.216761 master-0 kubenswrapper[31559]: I0216 02:41:42.216679 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 02:41:42.225497 master-0 kubenswrapper[31559]: I0216 02:41:42.225424 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:41:42.263523 master-0 kubenswrapper[31559]: I0216 02:41:42.263457 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 02:41:42.315383 master-0 kubenswrapper[31559]: I0216 02:41:42.315308 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 02:41:42.993918 master-0 kubenswrapper[31559]: I0216 02:41:42.993852 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 02:41:43.365157 master-0 kubenswrapper[31559]: I0216 02:41:43.364989 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 02:41:44.345056 master-0 kubenswrapper[31559]: I0216 02:41:44.344984 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 02:41:44.345056 master-0 kubenswrapper[31559]: I0216 02:41:44.345051 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 02:41:45.428790 master-0 kubenswrapper[31559]: I0216 02:41:45.428680 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.15:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 02:41:45.428790 master-0 kubenswrapper[31559]: I0216 02:41:45.428730 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.15:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 02:41:51.209197 master-0 
kubenswrapper[31559]: I0216 02:41:51.209138 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 02:41:51.214084 master-0 kubenswrapper[31559]: I0216 02:41:51.213985 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 02:41:51.220182 master-0 kubenswrapper[31559]: I0216 02:41:51.220131 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 02:41:52.099794 master-0 kubenswrapper[31559]: I0216 02:41:52.099694 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 02:41:53.107642 master-0 kubenswrapper[31559]: I0216 02:41:53.107570 31559 generic.go:334] "Generic (PLEG): container finished" podID="f2a12c06-81c2-4c89-b64a-79132f5241ed" containerID="30dd8519f624f370d98b8746a04a06d4085df0a4909d080ba9beb06725c29cac" exitCode=137 Feb 16 02:41:53.108537 master-0 kubenswrapper[31559]: I0216 02:41:53.107667 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f2a12c06-81c2-4c89-b64a-79132f5241ed","Type":"ContainerDied","Data":"30dd8519f624f370d98b8746a04a06d4085df0a4909d080ba9beb06725c29cac"} Feb 16 02:41:53.108537 master-0 kubenswrapper[31559]: I0216 02:41:53.107735 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f2a12c06-81c2-4c89-b64a-79132f5241ed","Type":"ContainerDied","Data":"d6c9adb588a20c821d27ee98f8f11b26c2eee355e81a929c93ce39be3fad5738"} Feb 16 02:41:53.108537 master-0 kubenswrapper[31559]: I0216 02:41:53.107752 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c9adb588a20c821d27ee98f8f11b26c2eee355e81a929c93ce39be3fad5738" Feb 16 02:41:53.170701 master-0 kubenswrapper[31559]: I0216 02:41:53.170629 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:53.205711 master-0 kubenswrapper[31559]: I0216 02:41:53.205647 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-combined-ca-bundle\") pod \"f2a12c06-81c2-4c89-b64a-79132f5241ed\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " Feb 16 02:41:53.206032 master-0 kubenswrapper[31559]: I0216 02:41:53.205807 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-config-data\") pod \"f2a12c06-81c2-4c89-b64a-79132f5241ed\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " Feb 16 02:41:53.206032 master-0 kubenswrapper[31559]: I0216 02:41:53.205843 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t92ck\" (UniqueName: \"kubernetes.io/projected/f2a12c06-81c2-4c89-b64a-79132f5241ed-kube-api-access-t92ck\") pod \"f2a12c06-81c2-4c89-b64a-79132f5241ed\" (UID: \"f2a12c06-81c2-4c89-b64a-79132f5241ed\") " Feb 16 02:41:53.210742 master-0 kubenswrapper[31559]: I0216 02:41:53.210649 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a12c06-81c2-4c89-b64a-79132f5241ed-kube-api-access-t92ck" (OuterVolumeSpecName: "kube-api-access-t92ck") pod "f2a12c06-81c2-4c89-b64a-79132f5241ed" (UID: "f2a12c06-81c2-4c89-b64a-79132f5241ed"). InnerVolumeSpecName "kube-api-access-t92ck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:41:53.240694 master-0 kubenswrapper[31559]: I0216 02:41:53.240584 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-config-data" (OuterVolumeSpecName: "config-data") pod "f2a12c06-81c2-4c89-b64a-79132f5241ed" (UID: "f2a12c06-81c2-4c89-b64a-79132f5241ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:53.243958 master-0 kubenswrapper[31559]: I0216 02:41:53.243838 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2a12c06-81c2-4c89-b64a-79132f5241ed" (UID: "f2a12c06-81c2-4c89-b64a-79132f5241ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:41:53.312360 master-0 kubenswrapper[31559]: I0216 02:41:53.312220 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:53.312360 master-0 kubenswrapper[31559]: I0216 02:41:53.312319 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a12c06-81c2-4c89-b64a-79132f5241ed-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:53.312360 master-0 kubenswrapper[31559]: I0216 02:41:53.312340 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t92ck\" (UniqueName: \"kubernetes.io/projected/f2a12c06-81c2-4c89-b64a-79132f5241ed-kube-api-access-t92ck\") on node \"master-0\" DevicePath \"\"" Feb 16 02:41:54.130658 master-0 kubenswrapper[31559]: I0216 02:41:54.130542 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.178562 master-0 kubenswrapper[31559]: I0216 02:41:54.178483 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 02:41:54.203708 master-0 kubenswrapper[31559]: I0216 02:41:54.203620 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 02:41:54.217104 master-0 kubenswrapper[31559]: I0216 02:41:54.217050 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 02:41:54.217876 master-0 kubenswrapper[31559]: E0216 02:41:54.217854 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a12c06-81c2-4c89-b64a-79132f5241ed" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 02:41:54.217980 master-0 kubenswrapper[31559]: I0216 02:41:54.217965 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a12c06-81c2-4c89-b64a-79132f5241ed" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 02:41:54.221323 master-0 kubenswrapper[31559]: I0216 02:41:54.221298 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a12c06-81c2-4c89-b64a-79132f5241ed" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 02:41:54.222343 master-0 kubenswrapper[31559]: I0216 02:41:54.222318 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.226757 master-0 kubenswrapper[31559]: I0216 02:41:54.226678 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 02:41:54.227531 master-0 kubenswrapper[31559]: I0216 02:41:54.227509 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 02:41:54.228037 master-0 kubenswrapper[31559]: I0216 02:41:54.228018 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 02:41:54.248144 master-0 kubenswrapper[31559]: I0216 02:41:54.248031 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.250395 master-0 kubenswrapper[31559]: I0216 02:41:54.250337 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.250761 master-0 kubenswrapper[31559]: I0216 02:41:54.250720 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s58kw\" (UniqueName: \"kubernetes.io/projected/84a8cbea-9312-4740-82f1-e09b5663fc73-kube-api-access-s58kw\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.250865 master-0 kubenswrapper[31559]: I0216 02:41:54.250832 
31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.250932 master-0 kubenswrapper[31559]: I0216 02:41:54.250879 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.255897 master-0 kubenswrapper[31559]: I0216 02:41:54.255737 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 02:41:54.351359 master-0 kubenswrapper[31559]: I0216 02:41:54.351278 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 02:41:54.351979 master-0 kubenswrapper[31559]: I0216 02:41:54.351931 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 02:41:54.353807 master-0 kubenswrapper[31559]: I0216 02:41:54.353764 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s58kw\" (UniqueName: \"kubernetes.io/projected/84a8cbea-9312-4740-82f1-e09b5663fc73-kube-api-access-s58kw\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.353922 master-0 kubenswrapper[31559]: I0216 02:41:54.353883 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.353984 master-0 kubenswrapper[31559]: I0216 02:41:54.353930 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.355029 master-0 kubenswrapper[31559]: I0216 02:41:54.354990 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.355206 master-0 kubenswrapper[31559]: I0216 02:41:54.355178 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.355966 master-0 kubenswrapper[31559]: I0216 02:41:54.355918 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 02:41:54.359247 master-0 kubenswrapper[31559]: I0216 02:41:54.358886 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.359538 master-0 kubenswrapper[31559]: I0216 02:41:54.359504 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Feb 16 02:41:54.360010 master-0 kubenswrapper[31559]: I0216 02:41:54.359982 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.362594 master-0 kubenswrapper[31559]: I0216 02:41:54.362537 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.364270 master-0 kubenswrapper[31559]: I0216 02:41:54.364208 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a8cbea-9312-4740-82f1-e09b5663fc73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.394191 master-0 kubenswrapper[31559]: I0216 02:41:54.394154 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s58kw\" (UniqueName: \"kubernetes.io/projected/84a8cbea-9312-4740-82f1-e09b5663fc73-kube-api-access-s58kw\") pod \"nova-cell1-novncproxy-0\" (UID: \"84a8cbea-9312-4740-82f1-e09b5663fc73\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 02:41:54.566585 master-0 kubenswrapper[31559]: I0216 02:41:54.566446 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:41:55.120501 master-0 kubenswrapper[31559]: I0216 02:41:55.120405 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 02:41:55.149379 master-0 kubenswrapper[31559]: I0216 02:41:55.149311 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84a8cbea-9312-4740-82f1-e09b5663fc73","Type":"ContainerStarted","Data":"cd8e59c6a3817e92d638765e6d4c678747e0f5d21d9eb1b8666966b7e9a1c5db"}
Feb 16 02:41:55.149379 master-0 kubenswrapper[31559]: I0216 02:41:55.149386 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 02:41:55.154510 master-0 kubenswrapper[31559]: I0216 02:41:55.154422 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 02:41:55.421524 master-0 kubenswrapper[31559]: I0216 02:41:55.420866 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b5c8d6759-8cjvh"]
Feb 16 02:41:55.423901 master-0 kubenswrapper[31559]: I0216 02:41:55.423660 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.444761 master-0 kubenswrapper[31559]: I0216 02:41:55.444590 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b5c8d6759-8cjvh"]
Feb 16 02:41:55.596563 master-0 kubenswrapper[31559]: I0216 02:41:55.596498 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.596793 master-0 kubenswrapper[31559]: I0216 02:41:55.596644 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5n29\" (UniqueName: \"kubernetes.io/projected/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-kube-api-access-p5n29\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.596793 master-0 kubenswrapper[31559]: I0216 02:41:55.596687 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.596911 master-0 kubenswrapper[31559]: I0216 02:41:55.596883 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-dns-svc\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.597003 master-0 kubenswrapper[31559]: I0216 02:41:55.596984 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-config\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.597061 master-0 kubenswrapper[31559]: I0216 02:41:55.597047 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-dns-swift-storage-0\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.698685 master-0 kubenswrapper[31559]: I0216 02:41:55.698594 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-dns-swift-storage-0\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.698967 master-0 kubenswrapper[31559]: I0216 02:41:55.698737 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.698967 master-0 kubenswrapper[31559]: I0216 02:41:55.698781 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5n29\" (UniqueName: \"kubernetes.io/projected/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-kube-api-access-p5n29\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.699071 master-0 kubenswrapper[31559]: I0216 02:41:55.699000 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.699119 master-0 kubenswrapper[31559]: I0216 02:41:55.699083 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-dns-svc\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.699207 master-0 kubenswrapper[31559]: I0216 02:41:55.699179 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-config\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.699815 master-0 kubenswrapper[31559]: I0216 02:41:55.699766 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.699946 master-0 kubenswrapper[31559]: I0216 02:41:55.699898 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-dns-swift-storage-0\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.700054 master-0 kubenswrapper[31559]: I0216 02:41:55.700007 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.700128 master-0 kubenswrapper[31559]: I0216 02:41:55.700107 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-config\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.700611 master-0 kubenswrapper[31559]: I0216 02:41:55.700408 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-dns-svc\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.723994 master-0 kubenswrapper[31559]: I0216 02:41:55.723714 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5n29\" (UniqueName: \"kubernetes.io/projected/d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a-kube-api-access-p5n29\") pod \"dnsmasq-dns-7b5c8d6759-8cjvh\" (UID: \"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a\") " pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.772410 master-0 kubenswrapper[31559]: I0216 02:41:55.772335 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:55.946193 master-0 kubenswrapper[31559]: I0216 02:41:55.946135 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a12c06-81c2-4c89-b64a-79132f5241ed" path="/var/lib/kubelet/pods/f2a12c06-81c2-4c89-b64a-79132f5241ed/volumes"
Feb 16 02:41:56.175505 master-0 kubenswrapper[31559]: I0216 02:41:56.175221 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84a8cbea-9312-4740-82f1-e09b5663fc73","Type":"ContainerStarted","Data":"4ddb779ffb1d071b203de7d08ebb7b1dc107b9893257453a4759b09cd0953b76"}
Feb 16 02:41:56.203471 master-0 kubenswrapper[31559]: I0216 02:41:56.203309 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.203281639 podStartE2EDuration="2.203281639s" podCreationTimestamp="2026-02-16 02:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:56.194606738 +0000 UTC m=+1168.539212773" watchObservedRunningTime="2026-02-16 02:41:56.203281639 +0000 UTC m=+1168.547887664"
Feb 16 02:41:56.311178 master-0 kubenswrapper[31559]: W0216 02:41:56.311101 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e2838f_fb7d_4ea9_b71b_32a8a6c8bb7a.slice/crio-5018a094c0c97e64c261c4cd5269d0ed6a8d822caa3a6f143eb45f03e1b3c017 WatchSource:0}: Error finding container 5018a094c0c97e64c261c4cd5269d0ed6a8d822caa3a6f143eb45f03e1b3c017: Status 404 returned error can't find the container with id 5018a094c0c97e64c261c4cd5269d0ed6a8d822caa3a6f143eb45f03e1b3c017
Feb 16 02:41:56.325071 master-0 kubenswrapper[31559]: I0216 02:41:56.323748 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b5c8d6759-8cjvh"]
Feb 16 02:41:57.199249 master-0 kubenswrapper[31559]: I0216 02:41:57.199185 31559 generic.go:334] "Generic (PLEG): container finished" podID="d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a" containerID="7059b57a5681253bf0025e36004e815c44ea1c205d4c872f1618f464bbf3070f" exitCode=0
Feb 16 02:41:57.199923 master-0 kubenswrapper[31559]: I0216 02:41:57.199238 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh" event={"ID":"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a","Type":"ContainerDied","Data":"7059b57a5681253bf0025e36004e815c44ea1c205d4c872f1618f464bbf3070f"}
Feb 16 02:41:57.199923 master-0 kubenswrapper[31559]: I0216 02:41:57.199301 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh" event={"ID":"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a","Type":"ContainerStarted","Data":"5018a094c0c97e64c261c4cd5269d0ed6a8d822caa3a6f143eb45f03e1b3c017"}
Feb 16 02:41:58.005975 master-0 kubenswrapper[31559]: I0216 02:41:58.005885 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:41:58.219397 master-0 kubenswrapper[31559]: I0216 02:41:58.219315 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-log" containerID="cri-o://987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796" gracePeriod=30
Feb 16 02:41:58.220828 master-0 kubenswrapper[31559]: I0216 02:41:58.220749 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh" event={"ID":"d5e2838f-fb7d-4ea9-b71b-32a8a6c8bb7a","Type":"ContainerStarted","Data":"3a6c5f998b28ce39143c2b23aae84ceaf4f9493dc80bce08efecb4a301acd65a"}
Feb 16 02:41:58.220951 master-0 kubenswrapper[31559]: I0216 02:41:58.220849 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:41:58.220951 master-0 kubenswrapper[31559]: I0216 02:41:58.220768 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-api" containerID="cri-o://3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f" gracePeriod=30
Feb 16 02:41:58.275215 master-0 kubenswrapper[31559]: I0216 02:41:58.275012 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh" podStartSLOduration=3.274994804 podStartE2EDuration="3.274994804s" podCreationTimestamp="2026-02-16 02:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:41:58.26386528 +0000 UTC m=+1170.608471295" watchObservedRunningTime="2026-02-16 02:41:58.274994804 +0000 UTC m=+1170.619600819"
Feb 16 02:41:59.237032 master-0 kubenswrapper[31559]: I0216 02:41:59.236966 31559 generic.go:334] "Generic (PLEG): container finished" podID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerID="987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796" exitCode=143
Feb 16 02:41:59.237669 master-0 kubenswrapper[31559]: I0216 02:41:59.237488 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb","Type":"ContainerDied","Data":"987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796"}
Feb 16 02:41:59.567601 master-0 kubenswrapper[31559]: I0216 02:41:59.567477 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:42:01.980474 master-0 kubenswrapper[31559]: I0216 02:42:01.980102 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:42:02.002427 master-0 kubenswrapper[31559]: I0216 02:42:02.002374 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-config-data\") pod \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") "
Feb 16 02:42:02.002595 master-0 kubenswrapper[31559]: I0216 02:42:02.002577 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtzzh\" (UniqueName: \"kubernetes.io/projected/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-kube-api-access-wtzzh\") pod \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") "
Feb 16 02:42:02.002651 master-0 kubenswrapper[31559]: I0216 02:42:02.002625 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-combined-ca-bundle\") pod \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") "
Feb 16 02:42:02.002767 master-0 kubenswrapper[31559]: I0216 02:42:02.002725 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-logs\") pod \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\" (UID: \"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb\") "
Feb 16 02:42:02.003562 master-0 kubenswrapper[31559]: I0216 02:42:02.003502 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-logs" (OuterVolumeSpecName: "logs") pod "ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" (UID: "ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 02:42:02.003952 master-0 kubenswrapper[31559]: I0216 02:42:02.003891 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 02:42:02.008000 master-0 kubenswrapper[31559]: I0216 02:42:02.007840 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-kube-api-access-wtzzh" (OuterVolumeSpecName: "kube-api-access-wtzzh") pod "ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" (UID: "ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb"). InnerVolumeSpecName "kube-api-access-wtzzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:42:02.072784 master-0 kubenswrapper[31559]: I0216 02:42:02.072691 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-config-data" (OuterVolumeSpecName: "config-data") pod "ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" (UID: "ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:42:02.103009 master-0 kubenswrapper[31559]: I0216 02:42:02.102839 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" (UID: "ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:42:02.106421 master-0 kubenswrapper[31559]: I0216 02:42:02.106304 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtzzh\" (UniqueName: \"kubernetes.io/projected/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-kube-api-access-wtzzh\") on node \"master-0\" DevicePath \"\""
Feb 16 02:42:02.106421 master-0 kubenswrapper[31559]: I0216 02:42:02.106346 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 02:42:02.106421 master-0 kubenswrapper[31559]: I0216 02:42:02.106357 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 02:42:02.286598 master-0 kubenswrapper[31559]: I0216 02:42:02.286540 31559 generic.go:334] "Generic (PLEG): container finished" podID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerID="3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f" exitCode=0
Feb 16 02:42:02.286814 master-0 kubenswrapper[31559]: I0216 02:42:02.286596 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:42:02.286814 master-0 kubenswrapper[31559]: I0216 02:42:02.286594 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb","Type":"ContainerDied","Data":"3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f"}
Feb 16 02:42:02.286814 master-0 kubenswrapper[31559]: I0216 02:42:02.286725 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb","Type":"ContainerDied","Data":"db18fc3fc0b32f17349c3dda2c1ce4996aef689afd14234196b03cf1d6e47fdd"}
Feb 16 02:42:02.286814 master-0 kubenswrapper[31559]: I0216 02:42:02.286746 31559 scope.go:117] "RemoveContainer" containerID="3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f"
Feb 16 02:42:02.346649 master-0 kubenswrapper[31559]: I0216 02:42:02.346507 31559 scope.go:117] "RemoveContainer" containerID="987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796"
Feb 16 02:42:02.359627 master-0 kubenswrapper[31559]: I0216 02:42:02.359509 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:42:02.372847 master-0 kubenswrapper[31559]: I0216 02:42:02.371857 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:42:02.386962 master-0 kubenswrapper[31559]: I0216 02:42:02.386919 31559 scope.go:117] "RemoveContainer" containerID="3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f"
Feb 16 02:42:02.387682 master-0 kubenswrapper[31559]: E0216 02:42:02.387641 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f\": container with ID starting with 3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f not found: ID does not exist" containerID="3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f"
Feb 16 02:42:02.387803 master-0 kubenswrapper[31559]: I0216 02:42:02.387689 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f"} err="failed to get container status \"3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f\": rpc error: code = NotFound desc = could not find container \"3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f\": container with ID starting with 3f4000677112f1cf8660ee6bb401ad92b5d89aeabf21c46504173b6eca834f7f not found: ID does not exist"
Feb 16 02:42:02.387803 master-0 kubenswrapper[31559]: I0216 02:42:02.387719 31559 scope.go:117] "RemoveContainer" containerID="987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796"
Feb 16 02:42:02.388079 master-0 kubenswrapper[31559]: E0216 02:42:02.388042 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796\": container with ID starting with 987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796 not found: ID does not exist" containerID="987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796"
Feb 16 02:42:02.388079 master-0 kubenswrapper[31559]: I0216 02:42:02.388072 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796"} err="failed to get container status \"987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796\": rpc error: code = NotFound desc = could not find container \"987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796\": container with ID starting with 987021a198ef36c5500cc33e90d910642fc1eefc588d1879e2a260ac7ea2d796 not found: ID does not exist"
Feb 16 02:42:02.392879 master-0 kubenswrapper[31559]: I0216 02:42:02.392847 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:42:02.393672 master-0 kubenswrapper[31559]: E0216 02:42:02.393650 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-log"
Feb 16 02:42:02.393924 master-0 kubenswrapper[31559]: I0216 02:42:02.393907 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-log"
Feb 16 02:42:02.394072 master-0 kubenswrapper[31559]: E0216 02:42:02.394052 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-api"
Feb 16 02:42:02.394256 master-0 kubenswrapper[31559]: I0216 02:42:02.394239 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-api"
Feb 16 02:42:02.394671 master-0 kubenswrapper[31559]: I0216 02:42:02.394651 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-api"
Feb 16 02:42:02.394830 master-0 kubenswrapper[31559]: I0216 02:42:02.394810 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" containerName="nova-api-log"
Feb 16 02:42:02.396945 master-0 kubenswrapper[31559]: I0216 02:42:02.396917 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:42:02.403847 master-0 kubenswrapper[31559]: I0216 02:42:02.402741 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 16 02:42:02.403847 master-0 kubenswrapper[31559]: I0216 02:42:02.402954 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 16 02:42:02.406710 master-0 kubenswrapper[31559]: I0216 02:42:02.405884 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:42:02.407590 master-0 kubenswrapper[31559]: I0216 02:42:02.407563 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 02:42:02.412612 master-0 kubenswrapper[31559]: I0216 02:42:02.412094 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ead525c4-0c20-4968-b1a2-2feeaa419d44-logs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.412612 master-0 kubenswrapper[31559]: I0216 02:42:02.412155 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-public-tls-certs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.412612 master-0 kubenswrapper[31559]: I0216 02:42:02.412190 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.412612 master-0 kubenswrapper[31559]: I0216 02:42:02.412272 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.412612 master-0 kubenswrapper[31559]: I0216 02:42:02.412359 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5j6w\" (UniqueName: \"kubernetes.io/projected/ead525c4-0c20-4968-b1a2-2feeaa419d44-kube-api-access-x5j6w\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.412612 master-0 kubenswrapper[31559]: I0216 02:42:02.412483 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-config-data\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.514040 master-0 kubenswrapper[31559]: I0216 02:42:02.513919 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5j6w\" (UniqueName: \"kubernetes.io/projected/ead525c4-0c20-4968-b1a2-2feeaa419d44-kube-api-access-x5j6w\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.514342 master-0 kubenswrapper[31559]: I0216 02:42:02.514151 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-config-data\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.515041 master-0 kubenswrapper[31559]: I0216 02:42:02.514998 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ead525c4-0c20-4968-b1a2-2feeaa419d44-logs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.515118 master-0 kubenswrapper[31559]: I0216 02:42:02.515073 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-public-tls-certs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.515278 master-0 kubenswrapper[31559]: I0216 02:42:02.515244 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.515408 master-0 kubenswrapper[31559]: I0216 02:42:02.515373 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.515635 master-0 kubenswrapper[31559]: I0216 02:42:02.515587 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ead525c4-0c20-4968-b1a2-2feeaa419d44-logs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.518288 master-0 kubenswrapper[31559]: I0216 02:42:02.518245 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-config-data\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.518461 master-0 kubenswrapper[31559]: I0216 02:42:02.518412 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.520048 master-0 kubenswrapper[31559]: I0216 02:42:02.520008 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-public-tls-certs\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.520382 master-0 kubenswrapper[31559]: I0216 02:42:02.520340 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.535981 master-0 kubenswrapper[31559]: I0216 02:42:02.535942 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5j6w\" (UniqueName: \"kubernetes.io/projected/ead525c4-0c20-4968-b1a2-2feeaa419d44-kube-api-access-x5j6w\") pod \"nova-api-0\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " pod="openstack/nova-api-0"
Feb 16 02:42:02.735278 master-0 kubenswrapper[31559]: I0216 02:42:02.735191 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:42:03.309495 master-0 kubenswrapper[31559]: I0216 02:42:03.307617 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:42:03.309495 master-0 kubenswrapper[31559]: W0216 02:42:03.308696 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podead525c4_0c20_4968_b1a2_2feeaa419d44.slice/crio-c3e4134b2d81cd45374456798886ea38050b8a19bf97c8b16379b91273aa5253 WatchSource:0}: Error finding container c3e4134b2d81cd45374456798886ea38050b8a19bf97c8b16379b91273aa5253: Status 404 returned error can't find the container with id c3e4134b2d81cd45374456798886ea38050b8a19bf97c8b16379b91273aa5253
Feb 16 02:42:03.964187 master-0 kubenswrapper[31559]: I0216 02:42:03.964083 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb" path="/var/lib/kubelet/pods/ce0c6b95-bf6a-47dc-aa0d-c448d9c5f2eb/volumes"
Feb 16 02:42:04.327629 master-0 kubenswrapper[31559]: I0216 02:42:04.327454 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ead525c4-0c20-4968-b1a2-2feeaa419d44","Type":"ContainerStarted","Data":"aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95"}
Feb 16 02:42:04.327629 master-0 kubenswrapper[31559]: I0216 02:42:04.327518 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ead525c4-0c20-4968-b1a2-2feeaa419d44","Type":"ContainerStarted","Data":"018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a"}
Feb 16 02:42:04.327629 master-0 kubenswrapper[31559]: I0216 02:42:04.327532 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ead525c4-0c20-4968-b1a2-2feeaa419d44","Type":"ContainerStarted","Data":"c3e4134b2d81cd45374456798886ea38050b8a19bf97c8b16379b91273aa5253"}
Feb 16 02:42:04.354519 master-0 kubenswrapper[31559]: I0216 02:42:04.354124 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.353929223 podStartE2EDuration="2.353929223s" podCreationTimestamp="2026-02-16 02:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:42:04.349825798 +0000 UTC m=+1176.694431823" watchObservedRunningTime="2026-02-16 02:42:04.353929223 +0000 UTC m=+1176.698535268"
Feb 16 02:42:04.566936 master-0 kubenswrapper[31559]: I0216 02:42:04.566879 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:42:04.588593 master-0 kubenswrapper[31559]: I0216 02:42:04.588458 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:42:05.372870 master-0 kubenswrapper[31559]: I0216 02:42:05.372799 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 02:42:05.622596 master-0 kubenswrapper[31559]: I0216 02:42:05.622380 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6jv"]
Feb 16 02:42:05.625340 master-0 kubenswrapper[31559]: I0216 02:42:05.625028 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6t6jv"
Feb 16 02:42:05.634665 master-0 kubenswrapper[31559]: I0216 02:42:05.634129 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-q2bvg"]
Feb 16 02:42:05.635739 master-0 kubenswrapper[31559]: I0216 02:42:05.635692 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 16 02:42:05.636350 master-0 kubenswrapper[31559]: I0216 02:42:05.636267 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 16 02:42:05.637837 master-0 kubenswrapper[31559]: I0216 02:42:05.637796 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-q2bvg"
Feb 16 02:42:05.653045 master-0 kubenswrapper[31559]: I0216 02:42:05.652977 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6jv"]
Feb 16 02:42:05.684086 master-0 kubenswrapper[31559]: I0216 02:42:05.683222 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-q2bvg"]
Feb 16 02:42:05.773724 master-0 kubenswrapper[31559]: I0216 02:42:05.773651 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b5c8d6759-8cjvh"
Feb 16 02:42:05.817837 master-0 kubenswrapper[31559]: I0216 02:42:05.817756 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv"
Feb 16 02:42:05.817837 master-0 kubenswrapper[31559]: I0216 02:42:05.817838 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t4jh\" (UniqueName: \"kubernetes.io/projected/871c887d-2a5a-4839-8ffb-eadb66301e8d-kube-api-access-8t4jh\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv"
Feb 16 02:42:05.818187 master-0 kubenswrapper[31559]: I0216 02:42:05.817940 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-config-data\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg"
Feb 16 02:42:05.818187 master-0 kubenswrapper[31559]: I0216 02:42:05.818019 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-scripts\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv"
Feb 16 02:42:05.818342 master-0 kubenswrapper[31559]: I0216 02:42:05.818255 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-combined-ca-bundle\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg"
Feb 16 02:42:05.818472 master-0 kubenswrapper[31559]: I0216 02:42:05.818355 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-config-data\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv"
Feb 16 02:42:05.818472 master-0 kubenswrapper[31559]: I0216
02:42:05.818403 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-scripts\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.818659 master-0 kubenswrapper[31559]: I0216 02:42:05.818485 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzp5\" (UniqueName: \"kubernetes.io/projected/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-kube-api-access-sxzp5\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.876249 master-0 kubenswrapper[31559]: I0216 02:42:05.868922 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-857cbc5f9f-lbsl2"] Feb 16 02:42:05.876249 master-0 kubenswrapper[31559]: I0216 02:42:05.869211 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" podUID="7fa59dc4-e794-44ee-9b14-1899479e07c7" containerName="dnsmasq-dns" containerID="cri-o://ace857275fb6ef2c729b6cd725d28524d6f35dee0e791135b0ca446b425284fa" gracePeriod=10 Feb 16 02:42:05.920649 master-0 kubenswrapper[31559]: I0216 02:42:05.920548 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-combined-ca-bundle\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.920915 master-0 kubenswrapper[31559]: I0216 02:42:05.920679 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-config-data\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:05.920915 master-0 kubenswrapper[31559]: I0216 02:42:05.920766 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-scripts\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.920915 master-0 kubenswrapper[31559]: I0216 02:42:05.920838 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzp5\" (UniqueName: \"kubernetes.io/projected/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-kube-api-access-sxzp5\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.921073 master-0 kubenswrapper[31559]: I0216 02:42:05.921013 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:05.921121 master-0 kubenswrapper[31559]: I0216 02:42:05.921074 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t4jh\" (UniqueName: \"kubernetes.io/projected/871c887d-2a5a-4839-8ffb-eadb66301e8d-kube-api-access-8t4jh\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:05.921168 master-0 kubenswrapper[31559]: I0216 02:42:05.921141 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-config-data\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.921216 master-0 kubenswrapper[31559]: I0216 02:42:05.921181 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-scripts\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:05.930763 master-0 kubenswrapper[31559]: I0216 02:42:05.930708 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-scripts\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.931409 master-0 kubenswrapper[31559]: I0216 02:42:05.931357 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:05.934152 master-0 kubenswrapper[31559]: I0216 02:42:05.934100 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-combined-ca-bundle\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.934231 master-0 kubenswrapper[31559]: I0216 02:42:05.934169 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-scripts\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:05.938194 master-0 kubenswrapper[31559]: I0216 02:42:05.938138 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-config-data\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:05.940382 master-0 kubenswrapper[31559]: I0216 02:42:05.940344 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-config-data\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.944353 master-0 kubenswrapper[31559]: I0216 02:42:05.944296 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzp5\" (UniqueName: \"kubernetes.io/projected/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-kube-api-access-sxzp5\") pod \"nova-cell1-host-discover-q2bvg\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:05.952014 master-0 kubenswrapper[31559]: I0216 02:42:05.951890 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t4jh\" (UniqueName: \"kubernetes.io/projected/871c887d-2a5a-4839-8ffb-eadb66301e8d-kube-api-access-8t4jh\") pod \"nova-cell1-cell-mapping-6t6jv\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:05.992044 master-0 kubenswrapper[31559]: I0216 02:42:05.991982 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:06.013865 master-0 kubenswrapper[31559]: I0216 02:42:06.013000 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:06.360674 master-0 kubenswrapper[31559]: I0216 02:42:06.359883 31559 generic.go:334] "Generic (PLEG): container finished" podID="7fa59dc4-e794-44ee-9b14-1899479e07c7" containerID="ace857275fb6ef2c729b6cd725d28524d6f35dee0e791135b0ca446b425284fa" exitCode=0 Feb 16 02:42:06.360674 master-0 kubenswrapper[31559]: I0216 02:42:06.360662 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" event={"ID":"7fa59dc4-e794-44ee-9b14-1899479e07c7","Type":"ContainerDied","Data":"ace857275fb6ef2c729b6cd725d28524d6f35dee0e791135b0ca446b425284fa"} Feb 16 02:42:06.486386 master-0 kubenswrapper[31559]: I0216 02:42:06.486323 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" Feb 16 02:42:06.635538 master-0 kubenswrapper[31559]: I0216 02:42:06.635430 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-q2bvg"] Feb 16 02:42:06.639297 master-0 kubenswrapper[31559]: I0216 02:42:06.639227 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-nb\") pod \"7fa59dc4-e794-44ee-9b14-1899479e07c7\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " Feb 16 02:42:06.639396 master-0 kubenswrapper[31559]: I0216 02:42:06.639352 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gppnf\" (UniqueName: \"kubernetes.io/projected/7fa59dc4-e794-44ee-9b14-1899479e07c7-kube-api-access-gppnf\") pod \"7fa59dc4-e794-44ee-9b14-1899479e07c7\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") 
" Feb 16 02:42:06.639473 master-0 kubenswrapper[31559]: I0216 02:42:06.639409 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-sb\") pod \"7fa59dc4-e794-44ee-9b14-1899479e07c7\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " Feb 16 02:42:06.640073 master-0 kubenswrapper[31559]: I0216 02:42:06.639550 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-svc\") pod \"7fa59dc4-e794-44ee-9b14-1899479e07c7\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " Feb 16 02:42:06.640073 master-0 kubenswrapper[31559]: I0216 02:42:06.639608 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-swift-storage-0\") pod \"7fa59dc4-e794-44ee-9b14-1899479e07c7\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " Feb 16 02:42:06.640073 master-0 kubenswrapper[31559]: I0216 02:42:06.639669 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-config\") pod \"7fa59dc4-e794-44ee-9b14-1899479e07c7\" (UID: \"7fa59dc4-e794-44ee-9b14-1899479e07c7\") " Feb 16 02:42:06.643907 master-0 kubenswrapper[31559]: I0216 02:42:06.643856 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa59dc4-e794-44ee-9b14-1899479e07c7-kube-api-access-gppnf" (OuterVolumeSpecName: "kube-api-access-gppnf") pod "7fa59dc4-e794-44ee-9b14-1899479e07c7" (UID: "7fa59dc4-e794-44ee-9b14-1899479e07c7"). InnerVolumeSpecName "kube-api-access-gppnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:42:06.646617 master-0 kubenswrapper[31559]: I0216 02:42:06.646517 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6jv"] Feb 16 02:42:06.702502 master-0 kubenswrapper[31559]: I0216 02:42:06.702326 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7fa59dc4-e794-44ee-9b14-1899479e07c7" (UID: "7fa59dc4-e794-44ee-9b14-1899479e07c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:42:06.712327 master-0 kubenswrapper[31559]: I0216 02:42:06.712274 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-config" (OuterVolumeSpecName: "config") pod "7fa59dc4-e794-44ee-9b14-1899479e07c7" (UID: "7fa59dc4-e794-44ee-9b14-1899479e07c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:42:06.717763 master-0 kubenswrapper[31559]: I0216 02:42:06.717714 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7fa59dc4-e794-44ee-9b14-1899479e07c7" (UID: "7fa59dc4-e794-44ee-9b14-1899479e07c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:42:06.719523 master-0 kubenswrapper[31559]: I0216 02:42:06.718772 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fa59dc4-e794-44ee-9b14-1899479e07c7" (UID: "7fa59dc4-e794-44ee-9b14-1899479e07c7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:42:06.735597 master-0 kubenswrapper[31559]: I0216 02:42:06.735554 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7fa59dc4-e794-44ee-9b14-1899479e07c7" (UID: "7fa59dc4-e794-44ee-9b14-1899479e07c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 02:42:06.742930 master-0 kubenswrapper[31559]: I0216 02:42:06.742886 31559 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:06.742930 master-0 kubenswrapper[31559]: I0216 02:42:06.742932 31559 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:06.743137 master-0 kubenswrapper[31559]: I0216 02:42:06.742949 31559 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-config\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:06.743137 master-0 kubenswrapper[31559]: I0216 02:42:06.742962 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:06.743137 master-0 kubenswrapper[31559]: I0216 02:42:06.742974 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gppnf\" (UniqueName: \"kubernetes.io/projected/7fa59dc4-e794-44ee-9b14-1899479e07c7-kube-api-access-gppnf\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:06.743137 master-0 
kubenswrapper[31559]: I0216 02:42:06.742986 31559 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fa59dc4-e794-44ee-9b14-1899479e07c7-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:07.377152 master-0 kubenswrapper[31559]: I0216 02:42:07.377026 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" event={"ID":"7fa59dc4-e794-44ee-9b14-1899479e07c7","Type":"ContainerDied","Data":"c78578ac35144b12e0613a8df6fce528c660f83852aa6f0730704393e484815a"} Feb 16 02:42:07.377659 master-0 kubenswrapper[31559]: I0216 02:42:07.377130 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-857cbc5f9f-lbsl2" Feb 16 02:42:07.377755 master-0 kubenswrapper[31559]: I0216 02:42:07.377644 31559 scope.go:117] "RemoveContainer" containerID="ace857275fb6ef2c729b6cd725d28524d6f35dee0e791135b0ca446b425284fa" Feb 16 02:42:07.378928 master-0 kubenswrapper[31559]: I0216 02:42:07.378905 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6t6jv" event={"ID":"871c887d-2a5a-4839-8ffb-eadb66301e8d","Type":"ContainerStarted","Data":"28dd63192024561ba0870ee951c165cef68cc93be7d9eb1a8dfe3ae0068c336c"} Feb 16 02:42:07.379730 master-0 kubenswrapper[31559]: I0216 02:42:07.379712 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6t6jv" event={"ID":"871c887d-2a5a-4839-8ffb-eadb66301e8d","Type":"ContainerStarted","Data":"adec280787e90a59b10474630f709fd1fe01d4f760e4d896e4f85ce5b7f9e38c"} Feb 16 02:42:07.391332 master-0 kubenswrapper[31559]: I0216 02:42:07.391252 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-q2bvg" event={"ID":"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f","Type":"ContainerStarted","Data":"e5261182c6a67b2a51b90cad0c5f3a5827f29c33c1f33ffb79284020587a5c33"} Feb 16 02:42:07.391332 
master-0 kubenswrapper[31559]: I0216 02:42:07.391307 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-q2bvg" event={"ID":"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f","Type":"ContainerStarted","Data":"e35b924dd6fb1ae3415288dd97d73f4c4f71d5a9775b10f6e58af3f9fad58c11"} Feb 16 02:42:07.417578 master-0 kubenswrapper[31559]: I0216 02:42:07.414232 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6t6jv" podStartSLOduration=2.414198713 podStartE2EDuration="2.414198713s" podCreationTimestamp="2026-02-16 02:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:42:07.402787831 +0000 UTC m=+1179.747393846" watchObservedRunningTime="2026-02-16 02:42:07.414198713 +0000 UTC m=+1179.758804758" Feb 16 02:42:07.417578 master-0 kubenswrapper[31559]: I0216 02:42:07.416713 31559 scope.go:117] "RemoveContainer" containerID="39db39b107871c2872de2f8e91c34b2b4e4a9f031f156bc09866241e90b5c086" Feb 16 02:42:07.478534 master-0 kubenswrapper[31559]: I0216 02:42:07.478427 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-q2bvg" podStartSLOduration=2.478407992 podStartE2EDuration="2.478407992s" podCreationTimestamp="2026-02-16 02:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:42:07.451352891 +0000 UTC m=+1179.795958906" watchObservedRunningTime="2026-02-16 02:42:07.478407992 +0000 UTC m=+1179.823014007" Feb 16 02:42:07.483699 master-0 kubenswrapper[31559]: I0216 02:42:07.483632 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-857cbc5f9f-lbsl2"] Feb 16 02:42:07.504073 master-0 kubenswrapper[31559]: I0216 02:42:07.502297 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-857cbc5f9f-lbsl2"] Feb 16 02:42:07.942096 master-0 kubenswrapper[31559]: I0216 02:42:07.942041 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa59dc4-e794-44ee-9b14-1899479e07c7" path="/var/lib/kubelet/pods/7fa59dc4-e794-44ee-9b14-1899479e07c7/volumes" Feb 16 02:42:09.457285 master-0 kubenswrapper[31559]: I0216 02:42:09.455581 31559 generic.go:334] "Generic (PLEG): container finished" podID="23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" containerID="e5261182c6a67b2a51b90cad0c5f3a5827f29c33c1f33ffb79284020587a5c33" exitCode=0 Feb 16 02:42:09.457285 master-0 kubenswrapper[31559]: I0216 02:42:09.455650 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-q2bvg" event={"ID":"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f","Type":"ContainerDied","Data":"e5261182c6a67b2a51b90cad0c5f3a5827f29c33c1f33ffb79284020587a5c33"} Feb 16 02:42:11.029799 master-0 kubenswrapper[31559]: I0216 02:42:11.029675 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:11.172837 master-0 kubenswrapper[31559]: I0216 02:42:11.172017 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-scripts\") pod \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " Feb 16 02:42:11.172837 master-0 kubenswrapper[31559]: I0216 02:42:11.172224 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzp5\" (UniqueName: \"kubernetes.io/projected/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-kube-api-access-sxzp5\") pod \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " Feb 16 02:42:11.172837 master-0 kubenswrapper[31559]: I0216 02:42:11.172301 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-combined-ca-bundle\") pod \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " Feb 16 02:42:11.172837 master-0 kubenswrapper[31559]: I0216 02:42:11.172402 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-config-data\") pod \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\" (UID: \"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f\") " Feb 16 02:42:11.177087 master-0 kubenswrapper[31559]: I0216 02:42:11.177029 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-scripts" (OuterVolumeSpecName: "scripts") pod "23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" (UID: "23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:11.188904 master-0 kubenswrapper[31559]: I0216 02:42:11.188859 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-kube-api-access-sxzp5" (OuterVolumeSpecName: "kube-api-access-sxzp5") pod "23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" (UID: "23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f"). InnerVolumeSpecName "kube-api-access-sxzp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:42:11.202856 master-0 kubenswrapper[31559]: I0216 02:42:11.202819 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-config-data" (OuterVolumeSpecName: "config-data") pod "23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" (UID: "23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:11.211897 master-0 kubenswrapper[31559]: I0216 02:42:11.211840 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" (UID: "23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:11.275963 master-0 kubenswrapper[31559]: I0216 02:42:11.275911 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:11.275963 master-0 kubenswrapper[31559]: I0216 02:42:11.275972 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:11.276167 master-0 kubenswrapper[31559]: I0216 02:42:11.275993 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:11.276167 master-0 kubenswrapper[31559]: I0216 02:42:11.276012 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzp5\" (UniqueName: \"kubernetes.io/projected/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f-kube-api-access-sxzp5\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:11.508561 master-0 kubenswrapper[31559]: I0216 02:42:11.508506 31559 generic.go:334] "Generic (PLEG): container finished" podID="871c887d-2a5a-4839-8ffb-eadb66301e8d" containerID="28dd63192024561ba0870ee951c165cef68cc93be7d9eb1a8dfe3ae0068c336c" exitCode=0 Feb 16 02:42:11.508889 master-0 kubenswrapper[31559]: I0216 02:42:11.508586 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6t6jv" event={"ID":"871c887d-2a5a-4839-8ffb-eadb66301e8d","Type":"ContainerDied","Data":"28dd63192024561ba0870ee951c165cef68cc93be7d9eb1a8dfe3ae0068c336c"} Feb 16 02:42:11.522237 master-0 kubenswrapper[31559]: I0216 02:42:11.522177 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-q2bvg" 
event={"ID":"23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f","Type":"ContainerDied","Data":"e35b924dd6fb1ae3415288dd97d73f4c4f71d5a9775b10f6e58af3f9fad58c11"} Feb 16 02:42:11.522237 master-0 kubenswrapper[31559]: I0216 02:42:11.522244 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35b924dd6fb1ae3415288dd97d73f4c4f71d5a9775b10f6e58af3f9fad58c11" Feb 16 02:42:11.522498 master-0 kubenswrapper[31559]: I0216 02:42:11.522329 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-q2bvg" Feb 16 02:42:12.735951 master-0 kubenswrapper[31559]: I0216 02:42:12.735878 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 02:42:12.737605 master-0 kubenswrapper[31559]: I0216 02:42:12.737562 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 02:42:13.114203 master-0 kubenswrapper[31559]: I0216 02:42:13.114046 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:13.141409 master-0 kubenswrapper[31559]: I0216 02:42:13.141335 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-combined-ca-bundle\") pod \"871c887d-2a5a-4839-8ffb-eadb66301e8d\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " Feb 16 02:42:13.142233 master-0 kubenswrapper[31559]: I0216 02:42:13.141648 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-scripts\") pod \"871c887d-2a5a-4839-8ffb-eadb66301e8d\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " Feb 16 02:42:13.142233 master-0 kubenswrapper[31559]: I0216 02:42:13.141676 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-config-data\") pod \"871c887d-2a5a-4839-8ffb-eadb66301e8d\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " Feb 16 02:42:13.142233 master-0 kubenswrapper[31559]: I0216 02:42:13.141762 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t4jh\" (UniqueName: \"kubernetes.io/projected/871c887d-2a5a-4839-8ffb-eadb66301e8d-kube-api-access-8t4jh\") pod \"871c887d-2a5a-4839-8ffb-eadb66301e8d\" (UID: \"871c887d-2a5a-4839-8ffb-eadb66301e8d\") " Feb 16 02:42:13.181428 master-0 kubenswrapper[31559]: I0216 02:42:13.146095 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-scripts" (OuterVolumeSpecName: "scripts") pod "871c887d-2a5a-4839-8ffb-eadb66301e8d" (UID: "871c887d-2a5a-4839-8ffb-eadb66301e8d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:13.181428 master-0 kubenswrapper[31559]: I0216 02:42:13.164910 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871c887d-2a5a-4839-8ffb-eadb66301e8d-kube-api-access-8t4jh" (OuterVolumeSpecName: "kube-api-access-8t4jh") pod "871c887d-2a5a-4839-8ffb-eadb66301e8d" (UID: "871c887d-2a5a-4839-8ffb-eadb66301e8d"). InnerVolumeSpecName "kube-api-access-8t4jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:42:13.181428 master-0 kubenswrapper[31559]: I0216 02:42:13.176561 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "871c887d-2a5a-4839-8ffb-eadb66301e8d" (UID: "871c887d-2a5a-4839-8ffb-eadb66301e8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:13.201000 master-0 kubenswrapper[31559]: I0216 02:42:13.200943 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-config-data" (OuterVolumeSpecName: "config-data") pod "871c887d-2a5a-4839-8ffb-eadb66301e8d" (UID: "871c887d-2a5a-4839-8ffb-eadb66301e8d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:13.244759 master-0 kubenswrapper[31559]: I0216 02:42:13.244699 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:13.244759 master-0 kubenswrapper[31559]: I0216 02:42:13.244741 31559 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:13.244759 master-0 kubenswrapper[31559]: I0216 02:42:13.244750 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871c887d-2a5a-4839-8ffb-eadb66301e8d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:13.244759 master-0 kubenswrapper[31559]: I0216 02:42:13.244760 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t4jh\" (UniqueName: \"kubernetes.io/projected/871c887d-2a5a-4839-8ffb-eadb66301e8d-kube-api-access-8t4jh\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:13.551551 master-0 kubenswrapper[31559]: I0216 02:42:13.551469 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6t6jv" event={"ID":"871c887d-2a5a-4839-8ffb-eadb66301e8d","Type":"ContainerDied","Data":"adec280787e90a59b10474630f709fd1fe01d4f760e4d896e4f85ce5b7f9e38c"} Feb 16 02:42:13.551551 master-0 kubenswrapper[31559]: I0216 02:42:13.551507 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6t6jv" Feb 16 02:42:13.551551 master-0 kubenswrapper[31559]: I0216 02:42:13.551539 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adec280787e90a59b10474630f709fd1fe01d4f760e4d896e4f85ce5b7f9e38c" Feb 16 02:42:13.750737 master-0 kubenswrapper[31559]: I0216 02:42:13.750608 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:42:13.751532 master-0 kubenswrapper[31559]: I0216 02:42:13.750694 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.18:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 02:42:13.797604 master-0 kubenswrapper[31559]: I0216 02:42:13.797511 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 02:42:13.797890 master-0 kubenswrapper[31559]: I0216 02:42:13.797786 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fdf75778-245b-4cac-9568-71c9dfbc0a93" containerName="nova-scheduler-scheduler" containerID="cri-o://ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2" gracePeriod=30 Feb 16 02:42:13.817248 master-0 kubenswrapper[31559]: I0216 02:42:13.817079 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 02:42:13.830525 master-0 kubenswrapper[31559]: I0216 02:42:13.830089 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:42:13.830525 master-0 kubenswrapper[31559]: I0216 02:42:13.830327 
31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-log" containerID="cri-o://a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0" gracePeriod=30 Feb 16 02:42:13.830831 master-0 kubenswrapper[31559]: I0216 02:42:13.830607 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-metadata" containerID="cri-o://82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5" gracePeriod=30 Feb 16 02:42:14.584036 master-0 kubenswrapper[31559]: I0216 02:42:14.583966 31559 generic.go:334] "Generic (PLEG): container finished" podID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerID="a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0" exitCode=143 Feb 16 02:42:14.584355 master-0 kubenswrapper[31559]: I0216 02:42:14.584059 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727","Type":"ContainerDied","Data":"a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0"} Feb 16 02:42:14.584355 master-0 kubenswrapper[31559]: I0216 02:42:14.584282 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-log" containerID="cri-o://018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a" gracePeriod=30 Feb 16 02:42:14.584859 master-0 kubenswrapper[31559]: I0216 02:42:14.584744 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-api" containerID="cri-o://aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95" gracePeriod=30 Feb 16 02:42:15.603906 master-0 
kubenswrapper[31559]: I0216 02:42:15.603819 31559 generic.go:334] "Generic (PLEG): container finished" podID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerID="018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a" exitCode=143 Feb 16 02:42:15.604917 master-0 kubenswrapper[31559]: I0216 02:42:15.603908 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ead525c4-0c20-4968-b1a2-2feeaa419d44","Type":"ContainerDied","Data":"018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a"} Feb 16 02:42:16.551445 master-0 kubenswrapper[31559]: I0216 02:42:16.551371 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 02:42:16.641597 master-0 kubenswrapper[31559]: I0216 02:42:16.641412 31559 generic.go:334] "Generic (PLEG): container finished" podID="fdf75778-245b-4cac-9568-71c9dfbc0a93" containerID="ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2" exitCode=0 Feb 16 02:42:16.641597 master-0 kubenswrapper[31559]: I0216 02:42:16.641479 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdf75778-245b-4cac-9568-71c9dfbc0a93","Type":"ContainerDied","Data":"ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2"} Feb 16 02:42:16.641597 master-0 kubenswrapper[31559]: I0216 02:42:16.641507 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdf75778-245b-4cac-9568-71c9dfbc0a93","Type":"ContainerDied","Data":"3e98170f80630bf45f03ebe251af59b4253668bfb4333892396ab44956710c3a"} Feb 16 02:42:16.641597 master-0 kubenswrapper[31559]: I0216 02:42:16.641524 31559 scope.go:117] "RemoveContainer" containerID="ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2" Feb 16 02:42:16.642353 master-0 kubenswrapper[31559]: I0216 02:42:16.641652 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 02:42:16.676065 master-0 kubenswrapper[31559]: I0216 02:42:16.676021 31559 scope.go:117] "RemoveContainer" containerID="ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2" Feb 16 02:42:16.676915 master-0 kubenswrapper[31559]: E0216 02:42:16.676867 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2\": container with ID starting with ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2 not found: ID does not exist" containerID="ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2" Feb 16 02:42:16.677003 master-0 kubenswrapper[31559]: I0216 02:42:16.676916 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2"} err="failed to get container status \"ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2\": rpc error: code = NotFound desc = could not find container \"ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2\": container with ID starting with ca0b10073fad2479bd6b2085e4b6949f491b64131e9ae82bfa3d4012a78cbed2 not found: ID does not exist" Feb 16 02:42:16.739782 master-0 kubenswrapper[31559]: I0216 02:42:16.739579 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjkks\" (UniqueName: \"kubernetes.io/projected/fdf75778-245b-4cac-9568-71c9dfbc0a93-kube-api-access-zjkks\") pod \"fdf75778-245b-4cac-9568-71c9dfbc0a93\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " Feb 16 02:42:16.740227 master-0 kubenswrapper[31559]: I0216 02:42:16.740179 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-combined-ca-bundle\") 
pod \"fdf75778-245b-4cac-9568-71c9dfbc0a93\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " Feb 16 02:42:16.741859 master-0 kubenswrapper[31559]: I0216 02:42:16.741815 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-config-data\") pod \"fdf75778-245b-4cac-9568-71c9dfbc0a93\" (UID: \"fdf75778-245b-4cac-9568-71c9dfbc0a93\") " Feb 16 02:42:16.747566 master-0 kubenswrapper[31559]: I0216 02:42:16.747386 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf75778-245b-4cac-9568-71c9dfbc0a93-kube-api-access-zjkks" (OuterVolumeSpecName: "kube-api-access-zjkks") pod "fdf75778-245b-4cac-9568-71c9dfbc0a93" (UID: "fdf75778-245b-4cac-9568-71c9dfbc0a93"). InnerVolumeSpecName "kube-api-access-zjkks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:42:16.793681 master-0 kubenswrapper[31559]: I0216 02:42:16.779305 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-config-data" (OuterVolumeSpecName: "config-data") pod "fdf75778-245b-4cac-9568-71c9dfbc0a93" (UID: "fdf75778-245b-4cac-9568-71c9dfbc0a93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:16.801746 master-0 kubenswrapper[31559]: I0216 02:42:16.801682 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdf75778-245b-4cac-9568-71c9dfbc0a93" (UID: "fdf75778-245b-4cac-9568-71c9dfbc0a93"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:16.846952 master-0 kubenswrapper[31559]: I0216 02:42:16.846837 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjkks\" (UniqueName: \"kubernetes.io/projected/fdf75778-245b-4cac-9568-71c9dfbc0a93-kube-api-access-zjkks\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:16.846952 master-0 kubenswrapper[31559]: I0216 02:42:16.846931 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:16.846952 master-0 kubenswrapper[31559]: I0216 02:42:16.846943 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf75778-245b-4cac-9568-71c9dfbc0a93-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:16.973202 master-0 kubenswrapper[31559]: I0216 02:42:16.973088 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": read tcp 10.128.0.2:45766->10.128.1.12:8775: read: connection reset by peer" Feb 16 02:42:16.974695 master-0 kubenswrapper[31559]: I0216 02:42:16.974519 31559 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": read tcp 10.128.0.2:45764->10.128.1.12:8775: read: connection reset by peer" Feb 16 02:42:16.990268 master-0 kubenswrapper[31559]: I0216 02:42:16.990109 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 02:42:17.015997 master-0 kubenswrapper[31559]: I0216 02:42:17.015938 31559 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-scheduler-0"] Feb 16 02:42:17.028588 master-0 kubenswrapper[31559]: I0216 02:42:17.028503 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 02:42:17.029103 master-0 kubenswrapper[31559]: E0216 02:42:17.029070 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa59dc4-e794-44ee-9b14-1899479e07c7" containerName="init" Feb 16 02:42:17.029103 master-0 kubenswrapper[31559]: I0216 02:42:17.029096 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa59dc4-e794-44ee-9b14-1899479e07c7" containerName="init" Feb 16 02:42:17.029201 master-0 kubenswrapper[31559]: E0216 02:42:17.029121 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" containerName="nova-manage" Feb 16 02:42:17.029201 master-0 kubenswrapper[31559]: I0216 02:42:17.029130 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" containerName="nova-manage" Feb 16 02:42:17.029201 master-0 kubenswrapper[31559]: E0216 02:42:17.029158 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa59dc4-e794-44ee-9b14-1899479e07c7" containerName="dnsmasq-dns" Feb 16 02:42:17.029201 master-0 kubenswrapper[31559]: I0216 02:42:17.029168 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa59dc4-e794-44ee-9b14-1899479e07c7" containerName="dnsmasq-dns" Feb 16 02:42:17.029201 master-0 kubenswrapper[31559]: E0216 02:42:17.029203 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871c887d-2a5a-4839-8ffb-eadb66301e8d" containerName="nova-manage" Feb 16 02:42:17.029367 master-0 kubenswrapper[31559]: I0216 02:42:17.029212 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="871c887d-2a5a-4839-8ffb-eadb66301e8d" containerName="nova-manage" Feb 16 02:42:17.029367 master-0 kubenswrapper[31559]: E0216 02:42:17.029234 31559 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fdf75778-245b-4cac-9568-71c9dfbc0a93" containerName="nova-scheduler-scheduler" Feb 16 02:42:17.029367 master-0 kubenswrapper[31559]: I0216 02:42:17.029245 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf75778-245b-4cac-9568-71c9dfbc0a93" containerName="nova-scheduler-scheduler" Feb 16 02:42:17.029560 master-0 kubenswrapper[31559]: I0216 02:42:17.029534 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="871c887d-2a5a-4839-8ffb-eadb66301e8d" containerName="nova-manage" Feb 16 02:42:17.029610 master-0 kubenswrapper[31559]: I0216 02:42:17.029565 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa59dc4-e794-44ee-9b14-1899479e07c7" containerName="dnsmasq-dns" Feb 16 02:42:17.029610 master-0 kubenswrapper[31559]: I0216 02:42:17.029602 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf75778-245b-4cac-9568-71c9dfbc0a93" containerName="nova-scheduler-scheduler" Feb 16 02:42:17.029682 master-0 kubenswrapper[31559]: I0216 02:42:17.029633 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" containerName="nova-manage" Feb 16 02:42:17.030555 master-0 kubenswrapper[31559]: I0216 02:42:17.030522 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 02:42:17.032824 master-0 kubenswrapper[31559]: I0216 02:42:17.032769 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 02:42:17.044104 master-0 kubenswrapper[31559]: I0216 02:42:17.044002 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 02:42:17.053476 master-0 kubenswrapper[31559]: I0216 02:42:17.052791 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5faa55e-4b93-4878-b9b1-91093188bc7e-config-data\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" Feb 16 02:42:17.053764 master-0 kubenswrapper[31559]: I0216 02:42:17.053620 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5faa55e-4b93-4878-b9b1-91093188bc7e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" Feb 16 02:42:17.053764 master-0 kubenswrapper[31559]: I0216 02:42:17.053735 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnh2f\" (UniqueName: \"kubernetes.io/projected/e5faa55e-4b93-4878-b9b1-91093188bc7e-kube-api-access-lnh2f\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" Feb 16 02:42:17.155730 master-0 kubenswrapper[31559]: I0216 02:42:17.155609 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5faa55e-4b93-4878-b9b1-91093188bc7e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" 
Feb 16 02:42:17.155927 master-0 kubenswrapper[31559]: I0216 02:42:17.155742 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnh2f\" (UniqueName: \"kubernetes.io/projected/e5faa55e-4b93-4878-b9b1-91093188bc7e-kube-api-access-lnh2f\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" Feb 16 02:42:17.155927 master-0 kubenswrapper[31559]: I0216 02:42:17.155834 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5faa55e-4b93-4878-b9b1-91093188bc7e-config-data\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" Feb 16 02:42:17.162574 master-0 kubenswrapper[31559]: I0216 02:42:17.162470 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5faa55e-4b93-4878-b9b1-91093188bc7e-config-data\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" Feb 16 02:42:17.170255 master-0 kubenswrapper[31559]: I0216 02:42:17.170158 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5faa55e-4b93-4878-b9b1-91093188bc7e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" Feb 16 02:42:17.176565 master-0 kubenswrapper[31559]: I0216 02:42:17.176204 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh2f\" (UniqueName: \"kubernetes.io/projected/e5faa55e-4b93-4878-b9b1-91093188bc7e-kube-api-access-lnh2f\") pod \"nova-scheduler-0\" (UID: \"e5faa55e-4b93-4878-b9b1-91093188bc7e\") " pod="openstack/nova-scheduler-0" Feb 16 02:42:17.255468 master-0 kubenswrapper[31559]: I0216 02:42:17.255249 31559 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 02:42:17.529506 master-0 kubenswrapper[31559]: I0216 02:42:17.529454 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:42:17.658085 master-0 kubenswrapper[31559]: I0216 02:42:17.658026 31559 generic.go:334] "Generic (PLEG): container finished" podID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerID="82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5" exitCode=0 Feb 16 02:42:17.658085 master-0 kubenswrapper[31559]: I0216 02:42:17.658072 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727","Type":"ContainerDied","Data":"82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5"} Feb 16 02:42:17.659312 master-0 kubenswrapper[31559]: I0216 02:42:17.658084 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:42:17.659312 master-0 kubenswrapper[31559]: I0216 02:42:17.658117 31559 scope.go:117] "RemoveContainer" containerID="82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5" Feb 16 02:42:17.659312 master-0 kubenswrapper[31559]: I0216 02:42:17.658102 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727","Type":"ContainerDied","Data":"9c836a3d768be8ea42b15245ebf7088be9117c08d227ea97d47ec08ea20ec538"} Feb 16 02:42:17.670031 master-0 kubenswrapper[31559]: I0216 02:42:17.669967 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-nova-metadata-tls-certs\") pod \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " Feb 16 02:42:17.670186 master-0 kubenswrapper[31559]: I0216 
02:42:17.670161 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-combined-ca-bundle\") pod \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " Feb 16 02:42:17.670466 master-0 kubenswrapper[31559]: I0216 02:42:17.670417 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-logs\") pod \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " Feb 16 02:42:17.670538 master-0 kubenswrapper[31559]: I0216 02:42:17.670510 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltkkv\" (UniqueName: \"kubernetes.io/projected/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-kube-api-access-ltkkv\") pod \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " Feb 16 02:42:17.670688 master-0 kubenswrapper[31559]: I0216 02:42:17.670662 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-config-data\") pod \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\" (UID: \"91fb97c4-4958-40fd-8c1f-a3d3ca6f9727\") " Feb 16 02:42:17.672504 master-0 kubenswrapper[31559]: I0216 02:42:17.672466 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-logs" (OuterVolumeSpecName: "logs") pod "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" (UID: "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:42:17.672878 master-0 kubenswrapper[31559]: I0216 02:42:17.672846 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:17.675308 master-0 kubenswrapper[31559]: I0216 02:42:17.675262 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-kube-api-access-ltkkv" (OuterVolumeSpecName: "kube-api-access-ltkkv") pod "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" (UID: "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727"). InnerVolumeSpecName "kube-api-access-ltkkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:42:17.691276 master-0 kubenswrapper[31559]: I0216 02:42:17.691231 31559 scope.go:117] "RemoveContainer" containerID="a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0" Feb 16 02:42:17.706106 master-0 kubenswrapper[31559]: I0216 02:42:17.706056 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-config-data" (OuterVolumeSpecName: "config-data") pod "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" (UID: "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:17.708735 master-0 kubenswrapper[31559]: I0216 02:42:17.708679 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" (UID: "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:17.723524 master-0 kubenswrapper[31559]: I0216 02:42:17.723483 31559 scope.go:117] "RemoveContainer" containerID="82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5" Feb 16 02:42:17.724217 master-0 kubenswrapper[31559]: E0216 02:42:17.724098 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5\": container with ID starting with 82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5 not found: ID does not exist" containerID="82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5" Feb 16 02:42:17.724217 master-0 kubenswrapper[31559]: I0216 02:42:17.724139 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5"} err="failed to get container status \"82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5\": rpc error: code = NotFound desc = could not find container \"82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5\": container with ID starting with 82c9080e49e7081ada7d82f2088c1a883181eb6dc496b909c51a3564bffb86c5 not found: ID does not exist" Feb 16 02:42:17.724217 master-0 kubenswrapper[31559]: I0216 02:42:17.724164 31559 scope.go:117] "RemoveContainer" containerID="a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0" Feb 16 02:42:17.724679 master-0 kubenswrapper[31559]: E0216 02:42:17.724634 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0\": container with ID starting with a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0 not found: ID does not exist" 
containerID="a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0" Feb 16 02:42:17.724739 master-0 kubenswrapper[31559]: I0216 02:42:17.724682 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0"} err="failed to get container status \"a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0\": rpc error: code = NotFound desc = could not find container \"a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0\": container with ID starting with a831245eca47a6845579f09785251f13ad26e431b35964b90d8cb14f3e8fefa0 not found: ID does not exist" Feb 16 02:42:17.744991 master-0 kubenswrapper[31559]: I0216 02:42:17.744908 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" (UID: "91fb97c4-4958-40fd-8c1f-a3d3ca6f9727"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:17.776915 master-0 kubenswrapper[31559]: I0216 02:42:17.776804 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:17.776915 master-0 kubenswrapper[31559]: I0216 02:42:17.776893 31559 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:17.776915 master-0 kubenswrapper[31559]: I0216 02:42:17.776913 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:17.777092 master-0 kubenswrapper[31559]: I0216 02:42:17.776924 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltkkv\" (UniqueName: \"kubernetes.io/projected/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727-kube-api-access-ltkkv\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:17.786743 master-0 kubenswrapper[31559]: I0216 02:42:17.786600 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 02:42:17.958737 master-0 kubenswrapper[31559]: I0216 02:42:17.958664 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf75778-245b-4cac-9568-71c9dfbc0a93" path="/var/lib/kubelet/pods/fdf75778-245b-4cac-9568-71c9dfbc0a93/volumes" Feb 16 02:42:18.011138 master-0 kubenswrapper[31559]: I0216 02:42:18.011046 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:42:18.022647 master-0 kubenswrapper[31559]: I0216 02:42:18.022573 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 
02:42:18.053815 master-0 kubenswrapper[31559]: I0216 02:42:18.053665 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:42:18.054386 master-0 kubenswrapper[31559]: E0216 02:42:18.054358 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-log" Feb 16 02:42:18.054386 master-0 kubenswrapper[31559]: I0216 02:42:18.054381 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-log" Feb 16 02:42:18.054581 master-0 kubenswrapper[31559]: E0216 02:42:18.054424 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-metadata" Feb 16 02:42:18.054581 master-0 kubenswrapper[31559]: I0216 02:42:18.054468 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-metadata" Feb 16 02:42:18.054812 master-0 kubenswrapper[31559]: I0216 02:42:18.054781 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-log" Feb 16 02:42:18.054906 master-0 kubenswrapper[31559]: I0216 02:42:18.054821 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" containerName="nova-metadata-metadata" Feb 16 02:42:18.056345 master-0 kubenswrapper[31559]: I0216 02:42:18.056316 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:42:18.059127 master-0 kubenswrapper[31559]: I0216 02:42:18.059069 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 02:42:18.059359 master-0 kubenswrapper[31559]: I0216 02:42:18.059329 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 02:42:18.088351 master-0 kubenswrapper[31559]: I0216 02:42:18.088279 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:42:18.190203 master-0 kubenswrapper[31559]: I0216 02:42:18.190133 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.190203 master-0 kubenswrapper[31559]: I0216 02:42:18.190201 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-logs\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.190588 master-0 kubenswrapper[31559]: I0216 02:42:18.190237 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.190588 master-0 kubenswrapper[31559]: I0216 02:42:18.190259 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-config-data\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.190588 master-0 kubenswrapper[31559]: I0216 02:42:18.190299 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjpwc\" (UniqueName: \"kubernetes.io/projected/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-kube-api-access-gjpwc\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.292502 master-0 kubenswrapper[31559]: I0216 02:42:18.292404 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjpwc\" (UniqueName: \"kubernetes.io/projected/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-kube-api-access-gjpwc\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.292919 master-0 kubenswrapper[31559]: I0216 02:42:18.292673 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.292919 master-0 kubenswrapper[31559]: I0216 02:42:18.292724 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-logs\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.292919 master-0 kubenswrapper[31559]: I0216 02:42:18.292756 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.292919 master-0 kubenswrapper[31559]: I0216 02:42:18.292812 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-config-data\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.295772 master-0 kubenswrapper[31559]: I0216 02:42:18.293666 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-logs\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.298019 master-0 kubenswrapper[31559]: I0216 02:42:18.297960 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-config-data\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.298522 master-0 kubenswrapper[31559]: I0216 02:42:18.298459 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.298695 master-0 kubenswrapper[31559]: I0216 02:42:18.298651 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " 
pod="openstack/nova-metadata-0" Feb 16 02:42:18.323479 master-0 kubenswrapper[31559]: I0216 02:42:18.323272 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjpwc\" (UniqueName: \"kubernetes.io/projected/533cb780-1f1b-4513-b3e9-8bd11e1fb12d-kube-api-access-gjpwc\") pod \"nova-metadata-0\" (UID: \"533cb780-1f1b-4513-b3e9-8bd11e1fb12d\") " pod="openstack/nova-metadata-0" Feb 16 02:42:18.380545 master-0 kubenswrapper[31559]: I0216 02:42:18.380487 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 02:42:18.690968 master-0 kubenswrapper[31559]: I0216 02:42:18.690839 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5faa55e-4b93-4878-b9b1-91093188bc7e","Type":"ContainerStarted","Data":"bd7aec05da39372eaa00e5154b7c834e01bfbeaa00f69b270fb8030d55f6f12d"} Feb 16 02:42:18.691583 master-0 kubenswrapper[31559]: I0216 02:42:18.690993 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5faa55e-4b93-4878-b9b1-91093188bc7e","Type":"ContainerStarted","Data":"7a18de3fcb77117c97bceb9c9df8ee4cfc888d58774559ed36160fb91e1a6b5b"} Feb 16 02:42:18.725558 master-0 kubenswrapper[31559]: I0216 02:42:18.718396 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.718376707 podStartE2EDuration="2.718376707s" podCreationTimestamp="2026-02-16 02:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:42:18.712507838 +0000 UTC m=+1191.057113863" watchObservedRunningTime="2026-02-16 02:42:18.718376707 +0000 UTC m=+1191.062982722" Feb 16 02:42:18.935055 master-0 kubenswrapper[31559]: W0216 02:42:18.934961 31559 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod533cb780_1f1b_4513_b3e9_8bd11e1fb12d.slice/crio-08c2804152e1001725a7ebd23096062255d44eacaef7661bc7c02689eee16230 WatchSource:0}: Error finding container 08c2804152e1001725a7ebd23096062255d44eacaef7661bc7c02689eee16230: Status 404 returned error can't find the container with id 08c2804152e1001725a7ebd23096062255d44eacaef7661bc7c02689eee16230 Feb 16 02:42:18.940747 master-0 kubenswrapper[31559]: I0216 02:42:18.940666 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 02:42:19.178818 master-0 kubenswrapper[31559]: E0216 02:42:19.178762 31559 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podead525c4_0c20_4968_b1a2_2feeaa419d44.slice/crio-aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95.scope\": RecentStats: unable to find data in memory cache]" Feb 16 02:42:19.566202 master-0 kubenswrapper[31559]: I0216 02:42:19.566159 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 02:42:19.709865 master-0 kubenswrapper[31559]: I0216 02:42:19.709744 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"533cb780-1f1b-4513-b3e9-8bd11e1fb12d","Type":"ContainerStarted","Data":"0b68a87df116c82c55c4ebae801a5f8a11389850b2fd46fa53d282e97722a856"} Feb 16 02:42:19.709865 master-0 kubenswrapper[31559]: I0216 02:42:19.709802 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"533cb780-1f1b-4513-b3e9-8bd11e1fb12d","Type":"ContainerStarted","Data":"1f6a90812e02e10242c74df1a6feba0f08782f7a91e63e11710e3df88f3db4be"} Feb 16 02:42:19.709865 master-0 kubenswrapper[31559]: I0216 02:42:19.709814 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"533cb780-1f1b-4513-b3e9-8bd11e1fb12d","Type":"ContainerStarted","Data":"08c2804152e1001725a7ebd23096062255d44eacaef7661bc7c02689eee16230"} Feb 16 02:42:19.713310 master-0 kubenswrapper[31559]: I0216 02:42:19.713275 31559 generic.go:334] "Generic (PLEG): container finished" podID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerID="aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95" exitCode=0 Feb 16 02:42:19.713548 master-0 kubenswrapper[31559]: I0216 02:42:19.713500 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 02:42:19.713645 master-0 kubenswrapper[31559]: I0216 02:42:19.713592 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ead525c4-0c20-4968-b1a2-2feeaa419d44","Type":"ContainerDied","Data":"aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95"} Feb 16 02:42:19.713689 master-0 kubenswrapper[31559]: I0216 02:42:19.713661 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ead525c4-0c20-4968-b1a2-2feeaa419d44","Type":"ContainerDied","Data":"c3e4134b2d81cd45374456798886ea38050b8a19bf97c8b16379b91273aa5253"} Feb 16 02:42:19.713726 master-0 kubenswrapper[31559]: I0216 02:42:19.713685 31559 scope.go:117] "RemoveContainer" containerID="aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95" Feb 16 02:42:19.736123 master-0 kubenswrapper[31559]: I0216 02:42:19.736069 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ead525c4-0c20-4968-b1a2-2feeaa419d44-logs\") pod \"ead525c4-0c20-4968-b1a2-2feeaa419d44\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " Feb 16 02:42:19.736406 master-0 kubenswrapper[31559]: I0216 02:42:19.736355 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-config-data\") pod \"ead525c4-0c20-4968-b1a2-2feeaa419d44\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " Feb 16 02:42:19.736477 master-0 kubenswrapper[31559]: I0216 02:42:19.736410 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-public-tls-certs\") pod \"ead525c4-0c20-4968-b1a2-2feeaa419d44\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " Feb 16 02:42:19.736477 master-0 kubenswrapper[31559]: I0216 
02:42:19.736464 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-internal-tls-certs\") pod \"ead525c4-0c20-4968-b1a2-2feeaa419d44\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " Feb 16 02:42:19.736687 master-0 kubenswrapper[31559]: I0216 02:42:19.736654 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5j6w\" (UniqueName: \"kubernetes.io/projected/ead525c4-0c20-4968-b1a2-2feeaa419d44-kube-api-access-x5j6w\") pod \"ead525c4-0c20-4968-b1a2-2feeaa419d44\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " Feb 16 02:42:19.736726 master-0 kubenswrapper[31559]: I0216 02:42:19.736698 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-combined-ca-bundle\") pod \"ead525c4-0c20-4968-b1a2-2feeaa419d44\" (UID: \"ead525c4-0c20-4968-b1a2-2feeaa419d44\") " Feb 16 02:42:19.737468 master-0 kubenswrapper[31559]: I0216 02:42:19.737406 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ead525c4-0c20-4968-b1a2-2feeaa419d44-logs" (OuterVolumeSpecName: "logs") pod "ead525c4-0c20-4968-b1a2-2feeaa419d44" (UID: "ead525c4-0c20-4968-b1a2-2feeaa419d44"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 02:42:19.737592 master-0 kubenswrapper[31559]: I0216 02:42:19.737563 31559 scope.go:117] "RemoveContainer" containerID="018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a" Feb 16 02:42:19.738351 master-0 kubenswrapper[31559]: I0216 02:42:19.738295 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.738277842 podStartE2EDuration="1.738277842s" podCreationTimestamp="2026-02-16 02:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:42:19.734656289 +0000 UTC m=+1192.079262304" watchObservedRunningTime="2026-02-16 02:42:19.738277842 +0000 UTC m=+1192.082883857" Feb 16 02:42:19.746836 master-0 kubenswrapper[31559]: I0216 02:42:19.746772 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead525c4-0c20-4968-b1a2-2feeaa419d44-kube-api-access-x5j6w" (OuterVolumeSpecName: "kube-api-access-x5j6w") pod "ead525c4-0c20-4968-b1a2-2feeaa419d44" (UID: "ead525c4-0c20-4968-b1a2-2feeaa419d44"). InnerVolumeSpecName "kube-api-access-x5j6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 02:42:19.766515 master-0 kubenswrapper[31559]: I0216 02:42:19.766478 31559 scope.go:117] "RemoveContainer" containerID="aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95" Feb 16 02:42:19.767096 master-0 kubenswrapper[31559]: E0216 02:42:19.767044 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95\": container with ID starting with aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95 not found: ID does not exist" containerID="aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95" Feb 16 02:42:19.767176 master-0 kubenswrapper[31559]: I0216 02:42:19.767135 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95"} err="failed to get container status \"aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95\": rpc error: code = NotFound desc = could not find container \"aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95\": container with ID starting with aebb52482fee6c950bcfd0d1fa5f265b7a713afc12d16bec90e2c5ef5f797e95 not found: ID does not exist" Feb 16 02:42:19.767246 master-0 kubenswrapper[31559]: I0216 02:42:19.767205 31559 scope.go:117] "RemoveContainer" containerID="018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a" Feb 16 02:42:19.767693 master-0 kubenswrapper[31559]: E0216 02:42:19.767636 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a\": container with ID starting with 018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a not found: ID does not exist" 
containerID="018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a" Feb 16 02:42:19.767770 master-0 kubenswrapper[31559]: I0216 02:42:19.767718 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a"} err="failed to get container status \"018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a\": rpc error: code = NotFound desc = could not find container \"018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a\": container with ID starting with 018ec1d82df2729dfd2ee6cd6d99a67bb57a206fa1d324d32befbcdff6e1e65a not found: ID does not exist" Feb 16 02:42:19.782377 master-0 kubenswrapper[31559]: I0216 02:42:19.782313 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ead525c4-0c20-4968-b1a2-2feeaa419d44" (UID: "ead525c4-0c20-4968-b1a2-2feeaa419d44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:19.796747 master-0 kubenswrapper[31559]: I0216 02:42:19.796638 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-config-data" (OuterVolumeSpecName: "config-data") pod "ead525c4-0c20-4968-b1a2-2feeaa419d44" (UID: "ead525c4-0c20-4968-b1a2-2feeaa419d44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:19.805404 master-0 kubenswrapper[31559]: I0216 02:42:19.805376 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ead525c4-0c20-4968-b1a2-2feeaa419d44" (UID: "ead525c4-0c20-4968-b1a2-2feeaa419d44"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:19.814663 master-0 kubenswrapper[31559]: I0216 02:42:19.814586 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ead525c4-0c20-4968-b1a2-2feeaa419d44" (UID: "ead525c4-0c20-4968-b1a2-2feeaa419d44"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 02:42:19.842010 master-0 kubenswrapper[31559]: I0216 02:42:19.841913 31559 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ead525c4-0c20-4968-b1a2-2feeaa419d44-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:19.842010 master-0 kubenswrapper[31559]: I0216 02:42:19.841951 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:19.842248 master-0 kubenswrapper[31559]: I0216 02:42:19.842067 31559 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:19.842248 master-0 kubenswrapper[31559]: I0216 02:42:19.842108 31559 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:19.842248 master-0 kubenswrapper[31559]: I0216 02:42:19.842140 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5j6w\" (UniqueName: \"kubernetes.io/projected/ead525c4-0c20-4968-b1a2-2feeaa419d44-kube-api-access-x5j6w\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:19.842248 master-0 kubenswrapper[31559]: I0216 
02:42:19.842154 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead525c4-0c20-4968-b1a2-2feeaa419d44-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 02:42:19.952151 master-0 kubenswrapper[31559]: I0216 02:42:19.952057 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91fb97c4-4958-40fd-8c1f-a3d3ca6f9727" path="/var/lib/kubelet/pods/91fb97c4-4958-40fd-8c1f-a3d3ca6f9727/volumes" Feb 16 02:42:20.577323 master-0 kubenswrapper[31559]: I0216 02:42:20.576626 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 02:42:20.625166 master-0 kubenswrapper[31559]: I0216 02:42:20.625107 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 02:42:20.657589 master-0 kubenswrapper[31559]: I0216 02:42:20.657493 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 02:42:20.658152 master-0 kubenswrapper[31559]: E0216 02:42:20.658079 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-api" Feb 16 02:42:20.658152 master-0 kubenswrapper[31559]: I0216 02:42:20.658106 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-api" Feb 16 02:42:20.658152 master-0 kubenswrapper[31559]: E0216 02:42:20.658122 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-log" Feb 16 02:42:20.658152 master-0 kubenswrapper[31559]: I0216 02:42:20.658129 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-log" Feb 16 02:42:20.658607 master-0 kubenswrapper[31559]: I0216 02:42:20.658364 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" 
containerName="nova-api-log" Feb 16 02:42:20.658607 master-0 kubenswrapper[31559]: I0216 02:42:20.658385 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" containerName="nova-api-api" Feb 16 02:42:20.659831 master-0 kubenswrapper[31559]: I0216 02:42:20.659798 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 02:42:20.661613 master-0 kubenswrapper[31559]: I0216 02:42:20.661422 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 02:42:20.664386 master-0 kubenswrapper[31559]: I0216 02:42:20.663413 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 02:42:20.666315 master-0 kubenswrapper[31559]: I0216 02:42:20.665771 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 02:42:20.685624 master-0 kubenswrapper[31559]: I0216 02:42:20.684212 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 02:42:20.830541 master-0 kubenswrapper[31559]: I0216 02:42:20.827273 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0" Feb 16 02:42:20.830541 master-0 kubenswrapper[31559]: I0216 02:42:20.827395 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3ff980-7be2-4166-ac28-731b813d1d83-logs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0" Feb 16 02:42:20.830541 master-0 kubenswrapper[31559]: I0216 02:42:20.827444 31559 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-public-tls-certs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0" Feb 16 02:42:20.830541 master-0 kubenswrapper[31559]: I0216 02:42:20.827606 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0" Feb 16 02:42:20.830541 master-0 kubenswrapper[31559]: I0216 02:42:20.827693 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdts\" (UniqueName: \"kubernetes.io/projected/5f3ff980-7be2-4166-ac28-731b813d1d83-kube-api-access-pcdts\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0" Feb 16 02:42:20.830541 master-0 kubenswrapper[31559]: I0216 02:42:20.827747 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-config-data\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0" Feb 16 02:42:20.929215 master-0 kubenswrapper[31559]: I0216 02:42:20.929152 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0" Feb 16 02:42:20.929215 master-0 kubenswrapper[31559]: I0216 02:42:20.929236 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pcdts\" (UniqueName: \"kubernetes.io/projected/5f3ff980-7be2-4166-ac28-731b813d1d83-kube-api-access-pcdts\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.930022 master-0 kubenswrapper[31559]: I0216 02:42:20.929972 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-config-data\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.930356 master-0 kubenswrapper[31559]: I0216 02:42:20.930321 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.930622 master-0 kubenswrapper[31559]: I0216 02:42:20.930588 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3ff980-7be2-4166-ac28-731b813d1d83-logs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.930702 master-0 kubenswrapper[31559]: I0216 02:42:20.930638 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-public-tls-certs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.931229 master-0 kubenswrapper[31559]: I0216 02:42:20.931192 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3ff980-7be2-4166-ac28-731b813d1d83-logs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.932716 master-0 kubenswrapper[31559]: I0216 02:42:20.932678 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.934027 master-0 kubenswrapper[31559]: I0216 02:42:20.933982 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-public-tls-certs\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.935724 master-0 kubenswrapper[31559]: I0216 02:42:20.935691 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.943883 master-0 kubenswrapper[31559]: I0216 02:42:20.943850 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcdts\" (UniqueName: \"kubernetes.io/projected/5f3ff980-7be2-4166-ac28-731b813d1d83-kube-api-access-pcdts\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:20.945978 master-0 kubenswrapper[31559]: I0216 02:42:20.945947 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3ff980-7be2-4166-ac28-731b813d1d83-config-data\") pod \"nova-api-0\" (UID: \"5f3ff980-7be2-4166-ac28-731b813d1d83\") " pod="openstack/nova-api-0"
Feb 16 02:42:21.014140 master-0 kubenswrapper[31559]: I0216 02:42:21.014070 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 02:42:21.505838 master-0 kubenswrapper[31559]: I0216 02:42:21.505771 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 02:42:21.867146 master-0 kubenswrapper[31559]: I0216 02:42:21.867073 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f3ff980-7be2-4166-ac28-731b813d1d83","Type":"ContainerStarted","Data":"27f48bd4c3fac943013038febe08345e48584f773062a5fcdfdbed0fc1c817d1"}
Feb 16 02:42:21.867672 master-0 kubenswrapper[31559]: I0216 02:42:21.867159 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f3ff980-7be2-4166-ac28-731b813d1d83","Type":"ContainerStarted","Data":"517f79d23396349127ab688dc1747e6e229440f4678454b5ff29af353c2bf087"}
Feb 16 02:42:21.953794 master-0 kubenswrapper[31559]: I0216 02:42:21.953725 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead525c4-0c20-4968-b1a2-2feeaa419d44" path="/var/lib/kubelet/pods/ead525c4-0c20-4968-b1a2-2feeaa419d44/volumes"
Feb 16 02:42:22.258945 master-0 kubenswrapper[31559]: I0216 02:42:22.258855 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 02:42:22.887326 master-0 kubenswrapper[31559]: I0216 02:42:22.887272 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5f3ff980-7be2-4166-ac28-731b813d1d83","Type":"ContainerStarted","Data":"c814ab030dcf1b2fb2b41626d7c655092c89d844eae552f316db3da6abb7a367"}
Feb 16 02:42:22.977603 master-0 kubenswrapper[31559]: I0216 02:42:22.977508 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.977486462 podStartE2EDuration="2.977486462s" podCreationTimestamp="2026-02-16 02:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:42:22.970103064 +0000 UTC m=+1195.314709119" watchObservedRunningTime="2026-02-16 02:42:22.977486462 +0000 UTC m=+1195.322092487"
Feb 16 02:42:23.381731 master-0 kubenswrapper[31559]: I0216 02:42:23.381641 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 02:42:23.381731 master-0 kubenswrapper[31559]: I0216 02:42:23.381724 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 02:42:27.255792 master-0 kubenswrapper[31559]: I0216 02:42:27.255696 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 16 02:42:27.313753 master-0 kubenswrapper[31559]: I0216 02:42:27.313699 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 16 02:42:27.999626 master-0 kubenswrapper[31559]: I0216 02:42:27.999551 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 02:42:28.380868 master-0 kubenswrapper[31559]: I0216 02:42:28.380726 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 02:42:28.380868 master-0 kubenswrapper[31559]: I0216 02:42:28.380819 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 02:42:29.394713 master-0 kubenswrapper[31559]: I0216 02:42:29.394616 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="533cb780-1f1b-4513-b3e9-8bd11e1fb12d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.22:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:42:29.395635 master-0 kubenswrapper[31559]: I0216 02:42:29.394623 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="533cb780-1f1b-4513-b3e9-8bd11e1fb12d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.22:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:42:31.014998 master-0 kubenswrapper[31559]: I0216 02:42:31.014867 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 02:42:31.016135 master-0 kubenswrapper[31559]: I0216 02:42:31.015108 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 02:42:32.036794 master-0 kubenswrapper[31559]: I0216 02:42:32.036698 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5f3ff980-7be2-4166-ac28-731b813d1d83" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.23:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:42:32.037928 master-0 kubenswrapper[31559]: I0216 02:42:32.036735 31559 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5f3ff980-7be2-4166-ac28-731b813d1d83" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.23:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 02:42:38.389976 master-0 kubenswrapper[31559]: I0216 02:42:38.389905 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 02:42:38.395103 master-0 kubenswrapper[31559]: I0216 02:42:38.395034 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 02:42:38.400358 master-0 kubenswrapper[31559]: I0216 02:42:38.400318 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 02:42:39.148961 master-0 kubenswrapper[31559]: I0216 02:42:39.148814 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 02:42:41.023339 master-0 kubenswrapper[31559]: I0216 02:42:41.023265 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 02:42:41.024015 master-0 kubenswrapper[31559]: I0216 02:42:41.023966 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 02:42:41.047372 master-0 kubenswrapper[31559]: I0216 02:42:41.047281 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 02:42:41.053165 master-0 kubenswrapper[31559]: I0216 02:42:41.053098 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 02:42:41.171925 master-0 kubenswrapper[31559]: I0216 02:42:41.171813 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 02:42:41.178069 master-0 kubenswrapper[31559]: I0216 02:42:41.178014 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 02:43:08.515345 master-0 kubenswrapper[31559]: I0216 02:43:08.515276 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"]
Feb 16 02:43:08.516194 master-0 kubenswrapper[31559]: I0216 02:43:08.515555 31559 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" podUID="83093487-16c4-44d2-a29f-bd113826e05a" containerName="sushy-emulator" containerID="cri-o://65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015" gracePeriod=30
Feb 16 02:43:09.060770 master-0 kubenswrapper[31559]: I0216 02:43:09.060717 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"
Feb 16 02:43:09.220095 master-0 kubenswrapper[31559]: I0216 02:43:09.219955 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-w9s2m"]
Feb 16 02:43:09.220747 master-0 kubenswrapper[31559]: E0216 02:43:09.220709 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83093487-16c4-44d2-a29f-bd113826e05a" containerName="sushy-emulator"
Feb 16 02:43:09.220747 master-0 kubenswrapper[31559]: I0216 02:43:09.220739 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="83093487-16c4-44d2-a29f-bd113826e05a" containerName="sushy-emulator"
Feb 16 02:43:09.221207 master-0 kubenswrapper[31559]: I0216 02:43:09.221174 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="83093487-16c4-44d2-a29f-bd113826e05a" containerName="sushy-emulator"
Feb 16 02:43:09.222289 master-0 kubenswrapper[31559]: I0216 02:43:09.222253 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.229539 master-0 kubenswrapper[31559]: I0216 02:43:09.229495 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-w9s2m"]
Feb 16 02:43:09.242252 master-0 kubenswrapper[31559]: I0216 02:43:09.242179 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/83093487-16c4-44d2-a29f-bd113826e05a-sushy-emulator-config\") pod \"83093487-16c4-44d2-a29f-bd113826e05a\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") "
Feb 16 02:43:09.242252 master-0 kubenswrapper[31559]: I0216 02:43:09.242241 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m52lj\" (UniqueName: \"kubernetes.io/projected/83093487-16c4-44d2-a29f-bd113826e05a-kube-api-access-m52lj\") pod \"83093487-16c4-44d2-a29f-bd113826e05a\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") "
Feb 16 02:43:09.242582 master-0 kubenswrapper[31559]: I0216 02:43:09.242298 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/83093487-16c4-44d2-a29f-bd113826e05a-os-client-config\") pod \"83093487-16c4-44d2-a29f-bd113826e05a\" (UID: \"83093487-16c4-44d2-a29f-bd113826e05a\") "
Feb 16 02:43:09.242723 master-0 kubenswrapper[31559]: I0216 02:43:09.242697 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83093487-16c4-44d2-a29f-bd113826e05a-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "83093487-16c4-44d2-a29f-bd113826e05a" (UID: "83093487-16c4-44d2-a29f-bd113826e05a"). InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:43:09.243325 master-0 kubenswrapper[31559]: I0216 02:43:09.243282 31559 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/83093487-16c4-44d2-a29f-bd113826e05a-sushy-emulator-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:43:09.246902 master-0 kubenswrapper[31559]: I0216 02:43:09.246845 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83093487-16c4-44d2-a29f-bd113826e05a-kube-api-access-m52lj" (OuterVolumeSpecName: "kube-api-access-m52lj") pod "83093487-16c4-44d2-a29f-bd113826e05a" (UID: "83093487-16c4-44d2-a29f-bd113826e05a"). InnerVolumeSpecName "kube-api-access-m52lj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:43:09.247693 master-0 kubenswrapper[31559]: I0216 02:43:09.247646 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83093487-16c4-44d2-a29f-bd113826e05a-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "83093487-16c4-44d2-a29f-bd113826e05a" (UID: "83093487-16c4-44d2-a29f-bd113826e05a"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:43:09.345638 master-0 kubenswrapper[31559]: I0216 02:43:09.345570 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2lg9\" (UniqueName: \"kubernetes.io/projected/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-kube-api-access-g2lg9\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.345990 master-0 kubenswrapper[31559]: I0216 02:43:09.345649 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-os-client-config\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.346624 master-0 kubenswrapper[31559]: I0216 02:43:09.346521 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.347195 master-0 kubenswrapper[31559]: I0216 02:43:09.347153 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m52lj\" (UniqueName: \"kubernetes.io/projected/83093487-16c4-44d2-a29f-bd113826e05a-kube-api-access-m52lj\") on node \"master-0\" DevicePath \"\""
Feb 16 02:43:09.347195 master-0 kubenswrapper[31559]: I0216 02:43:09.347191 31559 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/83093487-16c4-44d2-a29f-bd113826e05a-os-client-config\") on node \"master-0\" DevicePath \"\""
Feb 16 02:43:09.449720 master-0 kubenswrapper[31559]: I0216 02:43:09.449646 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2lg9\" (UniqueName: \"kubernetes.io/projected/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-kube-api-access-g2lg9\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.450295 master-0 kubenswrapper[31559]: I0216 02:43:09.450238 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-os-client-config\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.450586 master-0 kubenswrapper[31559]: I0216 02:43:09.450549 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.452817 master-0 kubenswrapper[31559]: I0216 02:43:09.452378 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.455607 master-0 kubenswrapper[31559]: I0216 02:43:09.455556 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-os-client-config\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.482505 master-0 kubenswrapper[31559]: I0216 02:43:09.482370 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2lg9\" (UniqueName: \"kubernetes.io/projected/194eb6bb-e60f-41e3-8a6a-f3331c90d58c-kube-api-access-g2lg9\") pod \"sushy-emulator-64488c485f-w9s2m\" (UID: \"194eb6bb-e60f-41e3-8a6a-f3331c90d58c\") " pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.543561 master-0 kubenswrapper[31559]: I0216 02:43:09.542188 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:09.639886 master-0 kubenswrapper[31559]: I0216 02:43:09.639823 31559 generic.go:334] "Generic (PLEG): container finished" podID="83093487-16c4-44d2-a29f-bd113826e05a" containerID="65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015" exitCode=0
Feb 16 02:43:09.640067 master-0 kubenswrapper[31559]: I0216 02:43:09.639891 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" event={"ID":"83093487-16c4-44d2-a29f-bd113826e05a","Type":"ContainerDied","Data":"65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015"}
Feb 16 02:43:09.640067 master-0 kubenswrapper[31559]: I0216 02:43:09.639934 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp" event={"ID":"83093487-16c4-44d2-a29f-bd113826e05a","Type":"ContainerDied","Data":"29cf36b3d1bbbe59b609a00a8ae6741a025ce1109dd6f774c7afe7497dbc0e4f"}
Feb 16 02:43:09.640067 master-0 kubenswrapper[31559]: I0216 02:43:09.639965 31559 scope.go:117] "RemoveContainer" containerID="65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015"
Feb 16 02:43:09.641079 master-0 kubenswrapper[31559]: I0216 02:43:09.640156 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"
Feb 16 02:43:09.726266 master-0 kubenswrapper[31559]: I0216 02:43:09.726138 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"]
Feb 16 02:43:09.743580 master-0 kubenswrapper[31559]: I0216 02:43:09.743415 31559 scope.go:117] "RemoveContainer" containerID="65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015"
Feb 16 02:43:09.745399 master-0 kubenswrapper[31559]: E0216 02:43:09.745327 31559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015\": container with ID starting with 65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015 not found: ID does not exist" containerID="65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015"
Feb 16 02:43:09.745399 master-0 kubenswrapper[31559]: I0216 02:43:09.745398 31559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015"} err="failed to get container status \"65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015\": rpc error: code = NotFound desc = could not find container \"65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015\": container with ID starting with 65ff54d138dcfd1472a57d726e907029e5d613b5e9ee1a7bfdd49515d95b5015 not found: ID does not exist"
Feb 16 02:43:09.809632 master-0 kubenswrapper[31559]: I0216 02:43:09.770106 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-t2jlp"]
Feb 16 02:43:09.947325 master-0 kubenswrapper[31559]: I0216 02:43:09.947265 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83093487-16c4-44d2-a29f-bd113826e05a" path="/var/lib/kubelet/pods/83093487-16c4-44d2-a29f-bd113826e05a/volumes"
Feb 16 02:43:09.948691 master-0 kubenswrapper[31559]: I0216 02:43:09.948655 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-w9s2m"]
Feb 16 02:43:10.661341 master-0 kubenswrapper[31559]: I0216 02:43:10.661292 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m" event={"ID":"194eb6bb-e60f-41e3-8a6a-f3331c90d58c","Type":"ContainerStarted","Data":"a22095a11b464ba0269e0508c52452b91571bd620036a3430c856ea39d19e1e4"}
Feb 16 02:43:10.661933 master-0 kubenswrapper[31559]: I0216 02:43:10.661904 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m" event={"ID":"194eb6bb-e60f-41e3-8a6a-f3331c90d58c","Type":"ContainerStarted","Data":"30e28e607703dc22dcdb7a0f3d2574177d6f85a535c0593ad3c30924e8521133"}
Feb 16 02:43:10.716141 master-0 kubenswrapper[31559]: I0216 02:43:10.716049 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m" podStartSLOduration=1.716025209 podStartE2EDuration="1.716025209s" podCreationTimestamp="2026-02-16 02:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:43:10.691099391 +0000 UTC m=+1243.035705446" watchObservedRunningTime="2026-02-16 02:43:10.716025209 +0000 UTC m=+1243.060631234"
Feb 16 02:43:19.544623 master-0 kubenswrapper[31559]: I0216 02:43:19.544555 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:19.545237 master-0 kubenswrapper[31559]: I0216 02:43:19.544919 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:19.571648 master-0 kubenswrapper[31559]: I0216 02:43:19.569310 31559 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:43:19.824125 master-0 kubenswrapper[31559]: I0216 02:43:19.823952 31559 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-64488c485f-w9s2m"
Feb 16 02:44:37.030224 master-0 kubenswrapper[31559]: I0216 02:44:37.030108 31559 scope.go:117] "RemoveContainer" containerID="d813a884d353fce7a8a09dbc9395e416cc6e807a219a672fddb07b7afb2e5e70"
Feb 16 02:44:37.072336 master-0 kubenswrapper[31559]: I0216 02:44:37.072265 31559 scope.go:117] "RemoveContainer" containerID="f654109ada4b3445a4fbbdd048831e4736211bc00c4c70d2aecbe8baefe2e609"
Feb 16 02:44:37.147753 master-0 kubenswrapper[31559]: I0216 02:44:37.147112 31559 scope.go:117] "RemoveContainer" containerID="38a91cad39c4c9641fa010eb29c4f3df093c2304b8da2a63043280fb9e9bd1be"
Feb 16 02:44:37.183751 master-0 kubenswrapper[31559]: I0216 02:44:37.183660 31559 scope.go:117] "RemoveContainer" containerID="23d055c25f63892b80e04926a3ff71c4d4b17071d223b367dada825790016325"
Feb 16 02:45:00.209352 master-0 kubenswrapper[31559]: I0216 02:45:00.209262 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"]
Feb 16 02:45:00.214813 master-0 kubenswrapper[31559]: I0216 02:45:00.214751 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.218092 master-0 kubenswrapper[31559]: I0216 02:45:00.218008 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-h8ldk"
Feb 16 02:45:00.218678 master-0 kubenswrapper[31559]: I0216 02:45:00.218629 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 02:45:00.219814 master-0 kubenswrapper[31559]: I0216 02:45:00.219727 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"]
Feb 16 02:45:00.278316 master-0 kubenswrapper[31559]: I0216 02:45:00.277695 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhgt4\" (UniqueName: \"kubernetes.io/projected/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-kube-api-access-bhgt4\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.278316 master-0 kubenswrapper[31559]: I0216 02:45:00.277873 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-config-volume\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.278316 master-0 kubenswrapper[31559]: I0216 02:45:00.277942 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-secret-volume\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.380324 master-0 kubenswrapper[31559]: I0216 02:45:00.380234 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-config-volume\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.380646 master-0 kubenswrapper[31559]: I0216 02:45:00.380522 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-secret-volume\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.381371 master-0 kubenswrapper[31559]: I0216 02:45:00.381326 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhgt4\" (UniqueName: \"kubernetes.io/projected/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-kube-api-access-bhgt4\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.382344 master-0 kubenswrapper[31559]: I0216 02:45:00.382221 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-config-volume\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.387269 master-0 kubenswrapper[31559]: I0216 02:45:00.387219 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-secret-volume\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.402491 master-0 kubenswrapper[31559]: I0216 02:45:00.402451 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhgt4\" (UniqueName: \"kubernetes.io/projected/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-kube-api-access-bhgt4\") pod \"collect-profiles-29520165-njjb9\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:00.570688 master-0 kubenswrapper[31559]: I0216 02:45:00.570507 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:01.075545 master-0 kubenswrapper[31559]: W0216 02:45:01.074645 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc42468b7_1587_4fa3_a97e_9cb4ecd3df9b.slice/crio-bd4dce224c8f3b41940f54f793bc580babbc75aadda018ad16ebe21c316752f2 WatchSource:0}: Error finding container bd4dce224c8f3b41940f54f793bc580babbc75aadda018ad16ebe21c316752f2: Status 404 returned error can't find the container with id bd4dce224c8f3b41940f54f793bc580babbc75aadda018ad16ebe21c316752f2
Feb 16 02:45:01.098220 master-0 kubenswrapper[31559]: I0216 02:45:01.098148 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"]
Feb 16 02:45:01.456004 master-0 kubenswrapper[31559]: I0216 02:45:01.455946 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9" event={"ID":"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b","Type":"ContainerStarted","Data":"1bf32328cef9c01a391380d6b1e0c7725c40871e1257c20329b621cce771f500"}
Feb 16 02:45:01.456004 master-0 kubenswrapper[31559]: I0216 02:45:01.455992 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9" event={"ID":"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b","Type":"ContainerStarted","Data":"bd4dce224c8f3b41940f54f793bc580babbc75aadda018ad16ebe21c316752f2"}
Feb 16 02:45:01.486632 master-0 kubenswrapper[31559]: I0216 02:45:01.486527 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9" podStartSLOduration=1.486507625 podStartE2EDuration="1.486507625s" podCreationTimestamp="2026-02-16 02:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 02:45:01.479994295 +0000 UTC m=+1353.824600310" watchObservedRunningTime="2026-02-16 02:45:01.486507625 +0000 UTC m=+1353.831113650"
Feb 16 02:45:02.489704 master-0 kubenswrapper[31559]: I0216 02:45:02.489369 31559 generic.go:334] "Generic (PLEG): container finished" podID="c42468b7-1587-4fa3-a97e-9cb4ecd3df9b" containerID="1bf32328cef9c01a391380d6b1e0c7725c40871e1257c20329b621cce771f500" exitCode=0
Feb 16 02:45:02.489704 master-0 kubenswrapper[31559]: I0216 02:45:02.489444 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9" event={"ID":"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b","Type":"ContainerDied","Data":"1bf32328cef9c01a391380d6b1e0c7725c40871e1257c20329b621cce771f500"}
Feb 16 02:45:03.999645 master-0 kubenswrapper[31559]: I0216 02:45:03.999418 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:04.188247 master-0 kubenswrapper[31559]: I0216 02:45:04.188154 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhgt4\" (UniqueName: \"kubernetes.io/projected/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-kube-api-access-bhgt4\") pod \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") "
Feb 16 02:45:04.188523 master-0 kubenswrapper[31559]: I0216 02:45:04.188346 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-secret-volume\") pod \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") "
Feb 16 02:45:04.188523 master-0 kubenswrapper[31559]: I0216 02:45:04.188378 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-config-volume\") pod \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\" (UID: \"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b\") "
Feb 16 02:45:04.189457 master-0 kubenswrapper[31559]: I0216 02:45:04.189391 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c42468b7-1587-4fa3-a97e-9cb4ecd3df9b" (UID: "c42468b7-1587-4fa3-a97e-9cb4ecd3df9b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 02:45:04.193684 master-0 kubenswrapper[31559]: I0216 02:45:04.193620 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-kube-api-access-bhgt4" (OuterVolumeSpecName: "kube-api-access-bhgt4") pod "c42468b7-1587-4fa3-a97e-9cb4ecd3df9b" (UID: "c42468b7-1587-4fa3-a97e-9cb4ecd3df9b"). InnerVolumeSpecName "kube-api-access-bhgt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 02:45:04.194085 master-0 kubenswrapper[31559]: I0216 02:45:04.194017 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c42468b7-1587-4fa3-a97e-9cb4ecd3df9b" (UID: "c42468b7-1587-4fa3-a97e-9cb4ecd3df9b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 02:45:04.291616 master-0 kubenswrapper[31559]: I0216 02:45:04.291534 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhgt4\" (UniqueName: \"kubernetes.io/projected/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-kube-api-access-bhgt4\") on node \"master-0\" DevicePath \"\""
Feb 16 02:45:04.291616 master-0 kubenswrapper[31559]: I0216 02:45:04.291586 31559 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 02:45:04.291616 master-0 kubenswrapper[31559]: I0216 02:45:04.291601 31559 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42468b7-1587-4fa3-a97e-9cb4ecd3df9b-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 02:45:04.561549 master-0 kubenswrapper[31559]: I0216 02:45:04.561334 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9" event={"ID":"c42468b7-1587-4fa3-a97e-9cb4ecd3df9b","Type":"ContainerDied","Data":"bd4dce224c8f3b41940f54f793bc580babbc75aadda018ad16ebe21c316752f2"}
Feb 16 02:45:04.561549 master-0 kubenswrapper[31559]: I0216 02:45:04.561409 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4dce224c8f3b41940f54f793bc580babbc75aadda018ad16ebe21c316752f2"
Feb 16 02:45:04.562233 master-0 kubenswrapper[31559]: I0216 02:45:04.562170 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520165-njjb9"
Feb 16 02:45:37.373961 master-0 kubenswrapper[31559]: I0216 02:45:37.373869 31559 scope.go:117] "RemoveContainer" containerID="9957c94575b16139adde1349a9b018de6417007657f1a1083222f6f2e05706b7"
Feb 16 02:47:37.498645 master-0 kubenswrapper[31559]: I0216 02:47:37.498565 31559 scope.go:117] "RemoveContainer" containerID="30dd8519f624f370d98b8746a04a06d4085df0a4909d080ba9beb06725c29cac"
Feb 16 02:48:25.076178 master-0 kubenswrapper[31559]: I0216 02:48:25.075488 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nxnt5"]
Feb 16 02:48:25.085338 master-0 kubenswrapper[31559]: I0216 02:48:25.085255 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nxnt5"]
Feb 16 02:48:25.953155 master-0 kubenswrapper[31559]: I0216 02:48:25.953039 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f478b2fd-7ccb-4480-8887-9decb4a4b32e" path="/var/lib/kubelet/pods/f478b2fd-7ccb-4480-8887-9decb4a4b32e/volumes"
Feb 16 02:48:26.094908 master-0 kubenswrapper[31559]: I0216 02:48:26.094816 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a6ad-account-create-update-sg6j7"]
Feb 16 02:48:26.108973 master-0 kubenswrapper[31559]: I0216 02:48:26.108860 31559 kubelet.go:2437] "SyncLoop DELETE"
source="api" pods=["openstack/placement-85ae-account-create-update-4pvhp"] Feb 16 02:48:26.119289 master-0 kubenswrapper[31559]: I0216 02:48:26.119214 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fx55d"] Feb 16 02:48:26.131672 master-0 kubenswrapper[31559]: I0216 02:48:26.131614 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-24kfs"] Feb 16 02:48:26.142566 master-0 kubenswrapper[31559]: I0216 02:48:26.142526 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85ae-account-create-update-4pvhp"] Feb 16 02:48:26.152966 master-0 kubenswrapper[31559]: I0216 02:48:26.152878 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a6ad-account-create-update-sg6j7"] Feb 16 02:48:26.162852 master-0 kubenswrapper[31559]: I0216 02:48:26.162736 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a2d6-account-create-update-f9fsm"] Feb 16 02:48:26.173969 master-0 kubenswrapper[31559]: I0216 02:48:26.173885 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fx55d"] Feb 16 02:48:26.186676 master-0 kubenswrapper[31559]: I0216 02:48:26.186621 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-24kfs"] Feb 16 02:48:26.198787 master-0 kubenswrapper[31559]: I0216 02:48:26.198711 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a2d6-account-create-update-f9fsm"] Feb 16 02:48:27.954992 master-0 kubenswrapper[31559]: I0216 02:48:27.954915 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c275eb5-4e56-4ade-8c4a-142ed7dbafea" path="/var/lib/kubelet/pods/0c275eb5-4e56-4ade-8c4a-142ed7dbafea/volumes" Feb 16 02:48:27.956254 master-0 kubenswrapper[31559]: I0216 02:48:27.956200 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292a5fd9-3c50-42c9-8f8d-bfbe7dbec084" 
path="/var/lib/kubelet/pods/292a5fd9-3c50-42c9-8f8d-bfbe7dbec084/volumes" Feb 16 02:48:27.957198 master-0 kubenswrapper[31559]: I0216 02:48:27.957150 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dee98140-f80a-4c3f-81aa-5a5183eeb144" path="/var/lib/kubelet/pods/dee98140-f80a-4c3f-81aa-5a5183eeb144/volumes" Feb 16 02:48:27.957985 master-0 kubenswrapper[31559]: I0216 02:48:27.957944 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e71ec1c5-6255-4555-b1b8-685613eb634b" path="/var/lib/kubelet/pods/e71ec1c5-6255-4555-b1b8-685613eb634b/volumes" Feb 16 02:48:27.959391 master-0 kubenswrapper[31559]: I0216 02:48:27.959346 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2f0932-270d-4b92-a1a3-03181503039b" path="/var/lib/kubelet/pods/ec2f0932-270d-4b92-a1a3-03181503039b/volumes" Feb 16 02:48:35.067522 master-0 kubenswrapper[31559]: I0216 02:48:35.067227 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btqxw"] Feb 16 02:48:35.091748 master-0 kubenswrapper[31559]: I0216 02:48:35.091662 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btqxw"] Feb 16 02:48:35.948520 master-0 kubenswrapper[31559]: I0216 02:48:35.948398 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df39c4df-ec6f-4f4f-b603-a9329fd44cf0" path="/var/lib/kubelet/pods/df39c4df-ec6f-4f4f-b603-a9329fd44cf0/volumes" Feb 16 02:48:37.588843 master-0 kubenswrapper[31559]: I0216 02:48:37.588769 31559 scope.go:117] "RemoveContainer" containerID="6fc531dd656582b5089f8fe6b11e89727951bafa71a08e66d7322ed27015d58f" Feb 16 02:48:37.632299 master-0 kubenswrapper[31559]: I0216 02:48:37.632224 31559 scope.go:117] "RemoveContainer" containerID="6bdec7d62c43a3fdc29fd47dda7c756fcec88d6d779e04dd5caf34785e480651" Feb 16 02:48:37.714397 master-0 kubenswrapper[31559]: I0216 02:48:37.714354 31559 scope.go:117] "RemoveContainer" 
containerID="b86a9279ace4ab4733d1c9807a2b522c0a591a646304505d9bf451a966fc3236" Feb 16 02:48:37.767130 master-0 kubenswrapper[31559]: I0216 02:48:37.767083 31559 scope.go:117] "RemoveContainer" containerID="36ce22eb02d3d001fb5039561a5e2b07970bd43ea6a67bd8c6188bb4cfa5c898" Feb 16 02:48:37.807634 master-0 kubenswrapper[31559]: I0216 02:48:37.807574 31559 scope.go:117] "RemoveContainer" containerID="c90a1be88865a2f8ff4c1adffae60f65354fc0626e863b41d677b18635a4b5ce" Feb 16 02:48:37.867880 master-0 kubenswrapper[31559]: I0216 02:48:37.867732 31559 scope.go:117] "RemoveContainer" containerID="25acd88e9aba26c34afbd4f127e5ac731311211cdbfa70423487cc2dffadc1a4" Feb 16 02:48:37.912857 master-0 kubenswrapper[31559]: I0216 02:48:37.912744 31559 scope.go:117] "RemoveContainer" containerID="da71f6d2a2f407279b730378c7690ac89969556512a3a05f1563b6a826e32857" Feb 16 02:48:53.087668 master-0 kubenswrapper[31559]: I0216 02:48:53.087596 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kcsvv"] Feb 16 02:48:53.107901 master-0 kubenswrapper[31559]: I0216 02:48:53.107831 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kcsvv"] Feb 16 02:48:53.963868 master-0 kubenswrapper[31559]: I0216 02:48:53.963797 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce4194c-65d5-4538-bb48-d1f17d880599" path="/var/lib/kubelet/pods/2ce4194c-65d5-4538-bb48-d1f17d880599/volumes" Feb 16 02:48:54.082467 master-0 kubenswrapper[31559]: I0216 02:48:54.078522 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-5bsxg"] Feb 16 02:48:54.120465 master-0 kubenswrapper[31559]: I0216 02:48:54.120087 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7rwhn"] Feb 16 02:48:54.141475 master-0 kubenswrapper[31559]: I0216 02:48:54.140659 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7rwhn"] Feb 16 02:48:54.151459 
master-0 kubenswrapper[31559]: I0216 02:48:54.151307 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-5bsxg"] Feb 16 02:48:55.950990 master-0 kubenswrapper[31559]: I0216 02:48:55.950916 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce0b20e3-b4e6-42eb-a920-588caa2195df" path="/var/lib/kubelet/pods/ce0b20e3-b4e6-42eb-a920-588caa2195df/volumes" Feb 16 02:48:55.952135 master-0 kubenswrapper[31559]: I0216 02:48:55.952077 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6fab0b-42dc-4235-bc08-73b63df4ed3a" path="/var/lib/kubelet/pods/eb6fab0b-42dc-4235-bc08-73b63df4ed3a/volumes" Feb 16 02:48:58.053459 master-0 kubenswrapper[31559]: I0216 02:48:58.053339 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7568-account-create-update-nxkd6"] Feb 16 02:48:58.065519 master-0 kubenswrapper[31559]: I0216 02:48:58.065477 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6654-account-create-update-5dvj4"] Feb 16 02:48:58.075223 master-0 kubenswrapper[31559]: I0216 02:48:58.075167 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6654-account-create-update-5dvj4"] Feb 16 02:48:58.084337 master-0 kubenswrapper[31559]: I0216 02:48:58.084250 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7568-account-create-update-nxkd6"] Feb 16 02:48:59.945868 master-0 kubenswrapper[31559]: I0216 02:48:59.945753 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143f4b4e-6ddb-47c3-bd5f-c370bc7905e2" path="/var/lib/kubelet/pods/143f4b4e-6ddb-47c3-bd5f-c370bc7905e2/volumes" Feb 16 02:48:59.948540 master-0 kubenswrapper[31559]: I0216 02:48:59.948425 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cd0bd49-96d7-417e-9900-3efb9e8a2de0" path="/var/lib/kubelet/pods/3cd0bd49-96d7-417e-9900-3efb9e8a2de0/volumes" Feb 16 02:49:04.072182 master-0 
kubenswrapper[31559]: I0216 02:49:04.072077 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qhrbh"] Feb 16 02:49:04.090956 master-0 kubenswrapper[31559]: I0216 02:49:04.090867 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qhrbh"] Feb 16 02:49:05.951792 master-0 kubenswrapper[31559]: I0216 02:49:05.951684 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf" path="/var/lib/kubelet/pods/d822f0ef-28bc-4cfb-8f3d-0f259b61d2bf/volumes" Feb 16 02:49:09.052235 master-0 kubenswrapper[31559]: I0216 02:49:09.052158 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-nqj26"] Feb 16 02:49:09.071375 master-0 kubenswrapper[31559]: I0216 02:49:09.071301 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-nqj26"] Feb 16 02:49:09.949542 master-0 kubenswrapper[31559]: I0216 02:49:09.949464 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d15bc7-f350-4b27-a27c-df6b81c9b50b" path="/var/lib/kubelet/pods/21d15bc7-f350-4b27-a27c-df6b81c9b50b/volumes" Feb 16 02:49:10.058646 master-0 kubenswrapper[31559]: I0216 02:49:10.058556 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-5df0-account-create-update-xnvlf"] Feb 16 02:49:10.080133 master-0 kubenswrapper[31559]: I0216 02:49:10.080054 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-5df0-account-create-update-xnvlf"] Feb 16 02:49:11.956995 master-0 kubenswrapper[31559]: I0216 02:49:11.956806 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a67f2cf-c024-48a0-8341-15aca05beca0" path="/var/lib/kubelet/pods/5a67f2cf-c024-48a0-8341-15aca05beca0/volumes" Feb 16 02:49:25.084602 master-0 kubenswrapper[31559]: I0216 02:49:25.084322 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jgr7s"] 
Feb 16 02:49:25.106200 master-0 kubenswrapper[31559]: I0216 02:49:25.106143 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zzhf8"] Feb 16 02:49:25.143954 master-0 kubenswrapper[31559]: I0216 02:49:25.142493 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jgr7s"] Feb 16 02:49:25.162264 master-0 kubenswrapper[31559]: I0216 02:49:25.162204 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zzhf8"] Feb 16 02:49:25.951389 master-0 kubenswrapper[31559]: I0216 02:49:25.951293 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456d0b0c-176d-4d17-993b-d02048e33d25" path="/var/lib/kubelet/pods/456d0b0c-176d-4d17-993b-d02048e33d25/volumes" Feb 16 02:49:25.952216 master-0 kubenswrapper[31559]: I0216 02:49:25.952170 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f313a2ac-7115-41c1-ac49-ede1baebd452" path="/var/lib/kubelet/pods/f313a2ac-7115-41c1-ac49-ede1baebd452/volumes" Feb 16 02:49:37.062463 master-0 kubenswrapper[31559]: I0216 02:49:37.061745 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dde57-db-sync-2z65z"] Feb 16 02:49:37.077602 master-0 kubenswrapper[31559]: I0216 02:49:37.076806 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dde57-db-sync-2z65z"] Feb 16 02:49:37.952969 master-0 kubenswrapper[31559]: I0216 02:49:37.952884 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbf5e91-b1f0-466e-80ca-d7b79ace4552" path="/var/lib/kubelet/pods/abbf5e91-b1f0-466e-80ca-d7b79ace4552/volumes" Feb 16 02:49:38.047602 master-0 kubenswrapper[31559]: I0216 02:49:38.047513 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mxb5k"] Feb 16 02:49:38.067222 master-0 kubenswrapper[31559]: I0216 02:49:38.067146 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-sync-mxb5k"] Feb 16 02:49:38.127787 master-0 kubenswrapper[31559]: I0216 02:49:38.127713 31559 scope.go:117] "RemoveContainer" containerID="f1a6ade63837afd647742266793c7993ba37f1974c65d4a8fa06ebff687d90b5" Feb 16 02:49:38.160720 master-0 kubenswrapper[31559]: I0216 02:49:38.160672 31559 scope.go:117] "RemoveContainer" containerID="46b54e5d6616bd6bba019be6f5c3a8dec553e57acb7c02a5d680b43627291646" Feb 16 02:49:38.246410 master-0 kubenswrapper[31559]: I0216 02:49:38.245921 31559 scope.go:117] "RemoveContainer" containerID="738f69e8f6966da07711446b5a16ec8590a4a02219ffdc9c2f1d72e42cb6bf4e" Feb 16 02:49:38.326243 master-0 kubenswrapper[31559]: I0216 02:49:38.326067 31559 scope.go:117] "RemoveContainer" containerID="e26c3bac2edcae1f951e7a3108bb83548298c61ec175861d49a1dbec63e44783" Feb 16 02:49:38.394915 master-0 kubenswrapper[31559]: I0216 02:49:38.394641 31559 scope.go:117] "RemoveContainer" containerID="7861aba56890b38321922f4fef5df9a19a7ceb579cfc023958c82a984b034d98" Feb 16 02:49:38.418743 master-0 kubenswrapper[31559]: I0216 02:49:38.418611 31559 scope.go:117] "RemoveContainer" containerID="b39b162722fec5db416e0318921cdfcdebd7b22a51d16df593ce4bf1ab9f8aa5" Feb 16 02:49:38.480131 master-0 kubenswrapper[31559]: I0216 02:49:38.480087 31559 scope.go:117] "RemoveContainer" containerID="ac6c5bb5b269fc2b71f36c3f832ee3ef82d89b42267b7f6e48ea6b12501e72e9" Feb 16 02:49:38.526288 master-0 kubenswrapper[31559]: I0216 02:49:38.526239 31559 scope.go:117] "RemoveContainer" containerID="596fc56ada4755a525e583fbbd2a9cfb5f1dc7a8e3c4ee5c1f1403d1c41bc945" Feb 16 02:49:38.555926 master-0 kubenswrapper[31559]: I0216 02:49:38.555755 31559 scope.go:117] "RemoveContainer" containerID="fe2b4c7f21c19332fed98f625225e4c1c08d28051d33eeb444a714eb51c66ca1" Feb 16 02:49:38.592035 master-0 kubenswrapper[31559]: I0216 02:49:38.591885 31559 scope.go:117] "RemoveContainer" containerID="ecdfbed0193bf44d16f7f7c379370462a576a2165d57038d68e23ba34c738f03" Feb 16 02:49:38.614692 
master-0 kubenswrapper[31559]: I0216 02:49:38.614657 31559 scope.go:117] "RemoveContainer" containerID="4b3cdded2da9fa3731909d910565acb813f101628c84b5933c4efa436aefe1a6" Feb 16 02:49:39.938605 master-0 kubenswrapper[31559]: I0216 02:49:39.938533 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="032a3554-910a-472a-8537-73e08670ffe8" path="/var/lib/kubelet/pods/032a3554-910a-472a-8537-73e08670ffe8/volumes" Feb 16 02:49:50.065995 master-0 kubenswrapper[31559]: I0216 02:49:50.065835 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-ljf52"] Feb 16 02:49:50.081602 master-0 kubenswrapper[31559]: I0216 02:49:50.081527 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-ljf52"] Feb 16 02:49:51.969623 master-0 kubenswrapper[31559]: I0216 02:49:51.969544 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656fd1dc-bb87-40c8-a161-31a194c23629" path="/var/lib/kubelet/pods/656fd1dc-bb87-40c8-a161-31a194c23629/volumes" Feb 16 02:49:57.068874 master-0 kubenswrapper[31559]: I0216 02:49:57.068713 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-ldt4r"] Feb 16 02:49:57.080594 master-0 kubenswrapper[31559]: I0216 02:49:57.080523 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-01eb-account-create-update-cvszz"] Feb 16 02:49:57.084696 master-0 kubenswrapper[31559]: I0216 02:49:57.084623 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-ldt4r"] Feb 16 02:49:57.094979 master-0 kubenswrapper[31559]: I0216 02:49:57.094894 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-01eb-account-create-update-cvszz"] Feb 16 02:49:57.953388 master-0 kubenswrapper[31559]: I0216 02:49:57.953307 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502da4cf-fead-4624-891d-f8db5815915f" 
path="/var/lib/kubelet/pods/502da4cf-fead-4624-891d-f8db5815915f/volumes" Feb 16 02:49:57.955011 master-0 kubenswrapper[31559]: I0216 02:49:57.954951 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df781460-393f-441c-85a9-ab19366c8734" path="/var/lib/kubelet/pods/df781460-393f-441c-85a9-ab19366c8734/volumes" Feb 16 02:50:18.052930 master-0 kubenswrapper[31559]: I0216 02:50:18.052840 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-245b-account-create-update-rgllf"] Feb 16 02:50:18.072490 master-0 kubenswrapper[31559]: I0216 02:50:18.072352 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e957-account-create-update-7245k"] Feb 16 02:50:18.083703 master-0 kubenswrapper[31559]: I0216 02:50:18.083168 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vc447"] Feb 16 02:50:18.092794 master-0 kubenswrapper[31559]: I0216 02:50:18.092739 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-dbe0-account-create-update-jbp2r"] Feb 16 02:50:18.101368 master-0 kubenswrapper[31559]: I0216 02:50:18.101290 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-245b-account-create-update-rgllf"] Feb 16 02:50:18.109198 master-0 kubenswrapper[31559]: I0216 02:50:18.109138 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e957-account-create-update-7245k"] Feb 16 02:50:18.118277 master-0 kubenswrapper[31559]: I0216 02:50:18.118212 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-dbe0-account-create-update-jbp2r"] Feb 16 02:50:18.126128 master-0 kubenswrapper[31559]: I0216 02:50:18.126064 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vc447"] Feb 16 02:50:19.053557 master-0 kubenswrapper[31559]: I0216 02:50:19.053369 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-db-create-xv575"] Feb 16 02:50:19.066162 master-0 kubenswrapper[31559]: I0216 02:50:19.066079 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-prwlf"] Feb 16 02:50:19.079545 master-0 kubenswrapper[31559]: I0216 02:50:19.079487 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xv575"] Feb 16 02:50:19.091601 master-0 kubenswrapper[31559]: I0216 02:50:19.091555 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-prwlf"] Feb 16 02:50:19.941079 master-0 kubenswrapper[31559]: I0216 02:50:19.940976 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323b3672-0931-4d00-9d68-6d4eae9a4cec" path="/var/lib/kubelet/pods/323b3672-0931-4d00-9d68-6d4eae9a4cec/volumes" Feb 16 02:50:19.941943 master-0 kubenswrapper[31559]: I0216 02:50:19.941904 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3376cdb9-0b42-43bc-a145-81508f342ccd" path="/var/lib/kubelet/pods/3376cdb9-0b42-43bc-a145-81508f342ccd/volumes" Feb 16 02:50:19.942769 master-0 kubenswrapper[31559]: I0216 02:50:19.942734 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b63ce3-aaa3-4eeb-9264-48d821ee7f81" path="/var/lib/kubelet/pods/68b63ce3-aaa3-4eeb-9264-48d821ee7f81/volumes" Feb 16 02:50:19.943652 master-0 kubenswrapper[31559]: I0216 02:50:19.943614 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85150fe7-7e88-4ebb-a2c7-643274767b45" path="/var/lib/kubelet/pods/85150fe7-7e88-4ebb-a2c7-643274767b45/volumes" Feb 16 02:50:19.947017 master-0 kubenswrapper[31559]: I0216 02:50:19.946878 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6448424-28c1-42d1-9f7f-67db21f0e53c" path="/var/lib/kubelet/pods/c6448424-28c1-42d1-9f7f-67db21f0e53c/volumes" Feb 16 02:50:19.948273 master-0 kubenswrapper[31559]: I0216 02:50:19.948235 31559 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="ed677e03-0917-4744-93f0-d7e64470c27d" path="/var/lib/kubelet/pods/ed677e03-0917-4744-93f0-d7e64470c27d/volumes" Feb 16 02:50:20.539028 master-0 kubenswrapper[31559]: I0216 02:50:20.538790 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-2p5mt"] Feb 16 02:50:20.552802 master-0 kubenswrapper[31559]: I0216 02:50:20.552737 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-2p5mt"] Feb 16 02:50:21.955390 master-0 kubenswrapper[31559]: I0216 02:50:21.955299 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68f2b97-da69-4ad8-aa22-20cbf6fcb819" path="/var/lib/kubelet/pods/e68f2b97-da69-4ad8-aa22-20cbf6fcb819/volumes" Feb 16 02:50:38.932663 master-0 kubenswrapper[31559]: I0216 02:50:38.932584 31559 scope.go:117] "RemoveContainer" containerID="89f83216dcf29606697d9b433300397e8357f8d45c7bbc3534ff236ae0e18cdc" Feb 16 02:50:38.981871 master-0 kubenswrapper[31559]: I0216 02:50:38.981791 31559 scope.go:117] "RemoveContainer" containerID="43488a4f351287cd2ba5e19c90c76153ba4dbd58ef00adb6ad5a8fa8a6f47ed6" Feb 16 02:50:39.072332 master-0 kubenswrapper[31559]: I0216 02:50:39.072282 31559 scope.go:117] "RemoveContainer" containerID="fa760d473a2ab26e3653d949b4f25234338401867f3b4574fc9895f762e78417" Feb 16 02:50:39.135622 master-0 kubenswrapper[31559]: I0216 02:50:39.135553 31559 scope.go:117] "RemoveContainer" containerID="fa7b96e260598981474c211b1d4e4409f38a196a76e9ee89e6026c1b77ba5afa" Feb 16 02:50:39.214109 master-0 kubenswrapper[31559]: I0216 02:50:39.214055 31559 scope.go:117] "RemoveContainer" containerID="ae642a2108709fc64fc5fdad7b85e1857a36ee9843f78e9d3a995993871301a1" Feb 16 02:50:39.253326 master-0 kubenswrapper[31559]: I0216 02:50:39.253231 31559 scope.go:117] "RemoveContainer" containerID="b6771a6f34446d7d5b6513f70ccf41ba6fb0e6b7561268b116cb2f6ad74d23b6" Feb 16 02:50:39.284497 master-0 kubenswrapper[31559]: I0216 
02:50:39.284364 31559 scope.go:117] "RemoveContainer" containerID="e6bbea076546c7400fdafe4f2000c907151e73ce02439ce04f17a22b27b86a09" Feb 16 02:50:39.327493 master-0 kubenswrapper[31559]: I0216 02:50:39.327376 31559 scope.go:117] "RemoveContainer" containerID="52e32c9653749fa46ebe4502645f4369efd319735791134ff4820bf98d64d539" Feb 16 02:50:39.361326 master-0 kubenswrapper[31559]: I0216 02:50:39.361165 31559 scope.go:117] "RemoveContainer" containerID="06a6f7b34024716d96e0f364464ae5d56c7102cc42d0797fda5389481b33f7e1" Feb 16 02:50:39.388368 master-0 kubenswrapper[31559]: I0216 02:50:39.388243 31559 scope.go:117] "RemoveContainer" containerID="cf3ac35f5b26be72e1fc0465c5d5599ca1dddd3c6bf02f04f4ce1162cce6acd6" Feb 16 02:50:39.448044 master-0 kubenswrapper[31559]: I0216 02:50:39.447845 31559 scope.go:117] "RemoveContainer" containerID="01c06861ebd5625318f57097560883b98994f0ad57474de583b2a4b05182eaa0" Feb 16 02:50:39.480044 master-0 kubenswrapper[31559]: I0216 02:50:39.479975 31559 scope.go:117] "RemoveContainer" containerID="12cb9765b7f8b2c337e54bfccb3a9365cee56c9c6ee25a776d426bba3814d702" Feb 16 02:51:03.079800 master-0 kubenswrapper[31559]: I0216 02:51:03.079606 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4dzgp"] Feb 16 02:51:03.095557 master-0 kubenswrapper[31559]: I0216 02:51:03.095472 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4dzgp"] Feb 16 02:51:03.946579 master-0 kubenswrapper[31559]: I0216 02:51:03.946527 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2abcfb-1594-4f85-8068-80c1a8d7fc3e" path="/var/lib/kubelet/pods/8b2abcfb-1594-4f85-8068-80c1a8d7fc3e/volumes" Feb 16 02:51:09.514079 master-0 kubenswrapper[31559]: E0216 02:51:09.513892 31559 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:57480->192.168.32.10:34313: write tcp 192.168.32.10:57480->192.168.32.10:34313: write: broken 
pipe Feb 16 02:51:29.086023 master-0 kubenswrapper[31559]: I0216 02:51:29.085935 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zcqp2"] Feb 16 02:51:29.105470 master-0 kubenswrapper[31559]: I0216 02:51:29.105364 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zcqp2"] Feb 16 02:51:29.956189 master-0 kubenswrapper[31559]: I0216 02:51:29.956117 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12aa2463-4471-4146-9b3e-e5532987d769" path="/var/lib/kubelet/pods/12aa2463-4471-4146-9b3e-e5532987d769/volumes" Feb 16 02:51:32.045930 master-0 kubenswrapper[31559]: I0216 02:51:32.045810 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-86vgj"] Feb 16 02:51:32.064770 master-0 kubenswrapper[31559]: I0216 02:51:32.064549 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-86vgj"] Feb 16 02:51:33.960363 master-0 kubenswrapper[31559]: I0216 02:51:33.960272 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d980fc2e-586c-4a8d-ad5f-d6385e22c959" path="/var/lib/kubelet/pods/d980fc2e-586c-4a8d-ad5f-d6385e22c959/volumes" Feb 16 02:51:39.798462 master-0 kubenswrapper[31559]: I0216 02:51:39.798369 31559 scope.go:117] "RemoveContainer" containerID="4979bad83bc9ff44e251c1dda305d92b6042d0ffacc394ba2959f41d490ef835" Feb 16 02:51:39.876425 master-0 kubenswrapper[31559]: I0216 02:51:39.876375 31559 scope.go:117] "RemoveContainer" containerID="7f676bcfacc06c6183bcab91167d1bac7b31aad9f2d96d4cf19dc517f26bfb3a" Feb 16 02:51:39.955564 master-0 kubenswrapper[31559]: I0216 02:51:39.955398 31559 scope.go:117] "RemoveContainer" containerID="ed615757ce97a8f4e497be28fe5d75e698dc7e0b994b1152a3a1c0039d4f9e08" Feb 16 02:52:11.072199 master-0 kubenswrapper[31559]: I0216 02:52:11.072001 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-host-discover-q2bvg"] Feb 16 02:52:11.089040 master-0 kubenswrapper[31559]: I0216 02:52:11.088967 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-q2bvg"] Feb 16 02:52:11.947456 master-0 kubenswrapper[31559]: I0216 02:52:11.947317 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f" path="/var/lib/kubelet/pods/23ec7bd8-7ac5-44fc-a7f1-67465e2cf88f/volumes" Feb 16 02:52:13.071800 master-0 kubenswrapper[31559]: I0216 02:52:13.071738 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6jv"] Feb 16 02:52:13.086788 master-0 kubenswrapper[31559]: I0216 02:52:13.085375 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6t6jv"] Feb 16 02:52:13.950078 master-0 kubenswrapper[31559]: I0216 02:52:13.950003 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871c887d-2a5a-4839-8ffb-eadb66301e8d" path="/var/lib/kubelet/pods/871c887d-2a5a-4839-8ffb-eadb66301e8d/volumes" Feb 16 02:52:40.111690 master-0 kubenswrapper[31559]: I0216 02:52:40.111548 31559 scope.go:117] "RemoveContainer" containerID="e5261182c6a67b2a51b90cad0c5f3a5827f29c33c1f33ffb79284020587a5c33" Feb 16 02:52:40.180671 master-0 kubenswrapper[31559]: I0216 02:52:40.180605 31559 scope.go:117] "RemoveContainer" containerID="28dd63192024561ba0870ee951c165cef68cc93be7d9eb1a8dfe3ae0068c336c" Feb 16 03:00:00.181363 master-0 kubenswrapper[31559]: I0216 03:00:00.181274 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8"] Feb 16 03:00:00.182361 master-0 kubenswrapper[31559]: E0216 03:00:00.181953 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42468b7-1587-4fa3-a97e-9cb4ecd3df9b" containerName="collect-profiles" Feb 16 03:00:00.182361 master-0 kubenswrapper[31559]: I0216 
03:00:00.181973 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42468b7-1587-4fa3-a97e-9cb4ecd3df9b" containerName="collect-profiles" Feb 16 03:00:00.182361 master-0 kubenswrapper[31559]: I0216 03:00:00.182323 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42468b7-1587-4fa3-a97e-9cb4ecd3df9b" containerName="collect-profiles" Feb 16 03:00:00.183285 master-0 kubenswrapper[31559]: I0216 03:00:00.183232 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.186572 master-0 kubenswrapper[31559]: I0216 03:00:00.186496 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 03:00:00.187586 master-0 kubenswrapper[31559]: I0216 03:00:00.187520 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-h8ldk" Feb 16 03:00:00.200926 master-0 kubenswrapper[31559]: I0216 03:00:00.199409 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8"] Feb 16 03:00:00.277456 master-0 kubenswrapper[31559]: I0216 03:00:00.266425 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a4e132e-055e-45da-872f-af94a91d85ea-secret-volume\") pod \"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.277456 master-0 kubenswrapper[31559]: I0216 03:00:00.266507 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctpn\" (UniqueName: \"kubernetes.io/projected/2a4e132e-055e-45da-872f-af94a91d85ea-kube-api-access-vctpn\") pod 
\"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.277456 master-0 kubenswrapper[31559]: I0216 03:00:00.266610 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a4e132e-055e-45da-872f-af94a91d85ea-config-volume\") pod \"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.388467 master-0 kubenswrapper[31559]: I0216 03:00:00.387845 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a4e132e-055e-45da-872f-af94a91d85ea-secret-volume\") pod \"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.388467 master-0 kubenswrapper[31559]: I0216 03:00:00.387921 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vctpn\" (UniqueName: \"kubernetes.io/projected/2a4e132e-055e-45da-872f-af94a91d85ea-kube-api-access-vctpn\") pod \"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.388467 master-0 kubenswrapper[31559]: I0216 03:00:00.387996 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a4e132e-055e-45da-872f-af94a91d85ea-config-volume\") pod \"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.391999 master-0 kubenswrapper[31559]: I0216 
03:00:00.389065 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a4e132e-055e-45da-872f-af94a91d85ea-config-volume\") pod \"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.399716 master-0 kubenswrapper[31559]: I0216 03:00:00.399679 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a4e132e-055e-45da-872f-af94a91d85ea-secret-volume\") pod \"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.408568 master-0 kubenswrapper[31559]: I0216 03:00:00.407905 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vctpn\" (UniqueName: \"kubernetes.io/projected/2a4e132e-055e-45da-872f-af94a91d85ea-kube-api-access-vctpn\") pod \"collect-profiles-29520180-f5mg8\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:00.521920 master-0 kubenswrapper[31559]: I0216 03:00:00.521799 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:01.041458 master-0 kubenswrapper[31559]: I0216 03:00:01.041370 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8"] Feb 16 03:00:01.312915 master-0 kubenswrapper[31559]: I0216 03:00:01.310963 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" event={"ID":"2a4e132e-055e-45da-872f-af94a91d85ea","Type":"ContainerStarted","Data":"074ce4e79fd4f91131baea7639037d3191d7cc90ca62e70a8a7d63698e980059"} Feb 16 03:00:01.312915 master-0 kubenswrapper[31559]: I0216 03:00:01.311029 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" event={"ID":"2a4e132e-055e-45da-872f-af94a91d85ea","Type":"ContainerStarted","Data":"d28ffdc707cf5b87c25b48eb3ee2ab70acfabcfbf93e8f8801e9dc9fa7c260cf"} Feb 16 03:00:01.350691 master-0 kubenswrapper[31559]: I0216 03:00:01.350596 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" podStartSLOduration=1.350570809 podStartE2EDuration="1.350570809s" podCreationTimestamp="2026-02-16 03:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 03:00:01.341153579 +0000 UTC m=+2253.685759624" watchObservedRunningTime="2026-02-16 03:00:01.350570809 +0000 UTC m=+2253.695176864" Feb 16 03:00:02.323947 master-0 kubenswrapper[31559]: I0216 03:00:02.323881 31559 generic.go:334] "Generic (PLEG): container finished" podID="2a4e132e-055e-45da-872f-af94a91d85ea" containerID="074ce4e79fd4f91131baea7639037d3191d7cc90ca62e70a8a7d63698e980059" exitCode=0 Feb 16 03:00:02.324466 master-0 kubenswrapper[31559]: I0216 03:00:02.323933 31559 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" event={"ID":"2a4e132e-055e-45da-872f-af94a91d85ea","Type":"ContainerDied","Data":"074ce4e79fd4f91131baea7639037d3191d7cc90ca62e70a8a7d63698e980059"} Feb 16 03:00:03.874899 master-0 kubenswrapper[31559]: I0216 03:00:03.874857 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:03.994651 master-0 kubenswrapper[31559]: I0216 03:00:03.994579 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a4e132e-055e-45da-872f-af94a91d85ea-secret-volume\") pod \"2a4e132e-055e-45da-872f-af94a91d85ea\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " Feb 16 03:00:03.994900 master-0 kubenswrapper[31559]: I0216 03:00:03.994816 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a4e132e-055e-45da-872f-af94a91d85ea-config-volume\") pod \"2a4e132e-055e-45da-872f-af94a91d85ea\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " Feb 16 03:00:03.995014 master-0 kubenswrapper[31559]: I0216 03:00:03.994979 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vctpn\" (UniqueName: \"kubernetes.io/projected/2a4e132e-055e-45da-872f-af94a91d85ea-kube-api-access-vctpn\") pod \"2a4e132e-055e-45da-872f-af94a91d85ea\" (UID: \"2a4e132e-055e-45da-872f-af94a91d85ea\") " Feb 16 03:00:03.997505 master-0 kubenswrapper[31559]: I0216 03:00:03.997398 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a4e132e-055e-45da-872f-af94a91d85ea-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a4e132e-055e-45da-872f-af94a91d85ea" (UID: "2a4e132e-055e-45da-872f-af94a91d85ea"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 03:00:03.997746 master-0 kubenswrapper[31559]: I0216 03:00:03.997688 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4e132e-055e-45da-872f-af94a91d85ea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a4e132e-055e-45da-872f-af94a91d85ea" (UID: "2a4e132e-055e-45da-872f-af94a91d85ea"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 03:00:03.998428 master-0 kubenswrapper[31559]: I0216 03:00:03.998326 31559 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a4e132e-055e-45da-872f-af94a91d85ea-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 03:00:03.998428 master-0 kubenswrapper[31559]: I0216 03:00:03.998376 31559 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a4e132e-055e-45da-872f-af94a91d85ea-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 03:00:04.000273 master-0 kubenswrapper[31559]: I0216 03:00:04.000223 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4e132e-055e-45da-872f-af94a91d85ea-kube-api-access-vctpn" (OuterVolumeSpecName: "kube-api-access-vctpn") pod "2a4e132e-055e-45da-872f-af94a91d85ea" (UID: "2a4e132e-055e-45da-872f-af94a91d85ea"). InnerVolumeSpecName "kube-api-access-vctpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 03:00:04.100751 master-0 kubenswrapper[31559]: I0216 03:00:04.100648 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vctpn\" (UniqueName: \"kubernetes.io/projected/2a4e132e-055e-45da-872f-af94a91d85ea-kube-api-access-vctpn\") on node \"master-0\" DevicePath \"\"" Feb 16 03:00:04.357159 master-0 kubenswrapper[31559]: I0216 03:00:04.356975 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" event={"ID":"2a4e132e-055e-45da-872f-af94a91d85ea","Type":"ContainerDied","Data":"d28ffdc707cf5b87c25b48eb3ee2ab70acfabcfbf93e8f8801e9dc9fa7c260cf"} Feb 16 03:00:04.357159 master-0 kubenswrapper[31559]: I0216 03:00:04.357069 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d28ffdc707cf5b87c25b48eb3ee2ab70acfabcfbf93e8f8801e9dc9fa7c260cf" Feb 16 03:00:04.357159 master-0 kubenswrapper[31559]: I0216 03:00:04.357081 31559 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520180-f5mg8" Feb 16 03:00:04.462887 master-0 kubenswrapper[31559]: I0216 03:00:04.462828 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59"] Feb 16 03:00:04.477038 master-0 kubenswrapper[31559]: I0216 03:00:04.476954 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520135-gdm59"] Feb 16 03:00:05.955298 master-0 kubenswrapper[31559]: I0216 03:00:05.955238 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8269ffdd-7357-4a8c-b578-0f482558f93e" path="/var/lib/kubelet/pods/8269ffdd-7357-4a8c-b578-0f482558f93e/volumes" Feb 16 03:00:40.524588 master-0 kubenswrapper[31559]: I0216 03:00:40.522180 31559 scope.go:117] "RemoveContainer" containerID="3d85392af80e65ab2985e54a4974e1f024f9a9bb02545f3e0dcd5540c8518016" Feb 16 03:01:00.217492 master-0 kubenswrapper[31559]: I0216 03:01:00.217007 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520181-96kj6"] Feb 16 03:01:00.218337 master-0 kubenswrapper[31559]: E0216 03:01:00.218111 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4e132e-055e-45da-872f-af94a91d85ea" containerName="collect-profiles" Feb 16 03:01:00.218337 master-0 kubenswrapper[31559]: I0216 03:01:00.218149 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4e132e-055e-45da-872f-af94a91d85ea" containerName="collect-profiles" Feb 16 03:01:00.222477 master-0 kubenswrapper[31559]: I0216 03:01:00.218816 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4e132e-055e-45da-872f-af94a91d85ea" containerName="collect-profiles" Feb 16 03:01:00.222477 master-0 kubenswrapper[31559]: I0216 03:01:00.220335 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.232467 master-0 kubenswrapper[31559]: I0216 03:01:00.230948 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520181-96kj6"] Feb 16 03:01:00.354454 master-0 kubenswrapper[31559]: I0216 03:01:00.354364 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-config-data\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.354677 master-0 kubenswrapper[31559]: I0216 03:01:00.354556 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-fernet-keys\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.356517 master-0 kubenswrapper[31559]: I0216 03:01:00.356380 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-combined-ca-bundle\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.356802 master-0 kubenswrapper[31559]: I0216 03:01:00.356741 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7qw\" (UniqueName: \"kubernetes.io/projected/9e36788b-13fa-45b9-82d5-2cb17400ee91-kube-api-access-nn7qw\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.458407 master-0 kubenswrapper[31559]: I0216 
03:01:00.458362 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7qw\" (UniqueName: \"kubernetes.io/projected/9e36788b-13fa-45b9-82d5-2cb17400ee91-kube-api-access-nn7qw\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.458810 master-0 kubenswrapper[31559]: I0216 03:01:00.458787 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-config-data\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.458927 master-0 kubenswrapper[31559]: I0216 03:01:00.458907 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-fernet-keys\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.459097 master-0 kubenswrapper[31559]: I0216 03:01:00.459078 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-combined-ca-bundle\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.462961 master-0 kubenswrapper[31559]: I0216 03:01:00.462936 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-combined-ca-bundle\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.475785 master-0 
kubenswrapper[31559]: I0216 03:01:00.475725 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-config-data\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.476208 master-0 kubenswrapper[31559]: I0216 03:01:00.476182 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-fernet-keys\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.478870 master-0 kubenswrapper[31559]: I0216 03:01:00.478831 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7qw\" (UniqueName: \"kubernetes.io/projected/9e36788b-13fa-45b9-82d5-2cb17400ee91-kube-api-access-nn7qw\") pod \"keystone-cron-29520181-96kj6\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:00.560948 master-0 kubenswrapper[31559]: I0216 03:01:00.560902 31559 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:01.056392 master-0 kubenswrapper[31559]: I0216 03:01:01.056317 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520181-96kj6"] Feb 16 03:01:01.304811 master-0 kubenswrapper[31559]: I0216 03:01:01.304669 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520181-96kj6" event={"ID":"9e36788b-13fa-45b9-82d5-2cb17400ee91","Type":"ContainerStarted","Data":"9c19c46ed163367b36d66c68a3d9c5739b4beb0d2fa1c028edd7fc22bb46e578"} Feb 16 03:01:01.304811 master-0 kubenswrapper[31559]: I0216 03:01:01.304723 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520181-96kj6" event={"ID":"9e36788b-13fa-45b9-82d5-2cb17400ee91","Type":"ContainerStarted","Data":"79e1fcd9096dc9ab2d664ffe3e860e1ccda2079efb7fa37064da7f4c20fac300"} Feb 16 03:01:01.340074 master-0 kubenswrapper[31559]: I0216 03:01:01.339962 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520181-96kj6" podStartSLOduration=1.339936765 podStartE2EDuration="1.339936765s" podCreationTimestamp="2026-02-16 03:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 03:01:01.322250354 +0000 UTC m=+2313.666856429" watchObservedRunningTime="2026-02-16 03:01:01.339936765 +0000 UTC m=+2313.684542800" Feb 16 03:01:03.329703 master-0 kubenswrapper[31559]: I0216 03:01:03.329600 31559 generic.go:334] "Generic (PLEG): container finished" podID="9e36788b-13fa-45b9-82d5-2cb17400ee91" containerID="9c19c46ed163367b36d66c68a3d9c5739b4beb0d2fa1c028edd7fc22bb46e578" exitCode=0 Feb 16 03:01:03.329703 master-0 kubenswrapper[31559]: I0216 03:01:03.329687 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520181-96kj6" 
event={"ID":"9e36788b-13fa-45b9-82d5-2cb17400ee91","Type":"ContainerDied","Data":"9c19c46ed163367b36d66c68a3d9c5739b4beb0d2fa1c028edd7fc22bb46e578"} Feb 16 03:01:05.053117 master-0 kubenswrapper[31559]: I0216 03:01:05.053077 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:05.197513 master-0 kubenswrapper[31559]: I0216 03:01:05.197305 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-config-data\") pod \"9e36788b-13fa-45b9-82d5-2cb17400ee91\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " Feb 16 03:01:05.197997 master-0 kubenswrapper[31559]: I0216 03:01:05.197540 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-combined-ca-bundle\") pod \"9e36788b-13fa-45b9-82d5-2cb17400ee91\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " Feb 16 03:01:05.197997 master-0 kubenswrapper[31559]: I0216 03:01:05.197644 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn7qw\" (UniqueName: \"kubernetes.io/projected/9e36788b-13fa-45b9-82d5-2cb17400ee91-kube-api-access-nn7qw\") pod \"9e36788b-13fa-45b9-82d5-2cb17400ee91\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " Feb 16 03:01:05.197997 master-0 kubenswrapper[31559]: I0216 03:01:05.197688 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-fernet-keys\") pod \"9e36788b-13fa-45b9-82d5-2cb17400ee91\" (UID: \"9e36788b-13fa-45b9-82d5-2cb17400ee91\") " Feb 16 03:01:05.201337 master-0 kubenswrapper[31559]: I0216 03:01:05.201261 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9e36788b-13fa-45b9-82d5-2cb17400ee91" (UID: "9e36788b-13fa-45b9-82d5-2cb17400ee91"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 03:01:05.202953 master-0 kubenswrapper[31559]: I0216 03:01:05.202862 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e36788b-13fa-45b9-82d5-2cb17400ee91-kube-api-access-nn7qw" (OuterVolumeSpecName: "kube-api-access-nn7qw") pod "9e36788b-13fa-45b9-82d5-2cb17400ee91" (UID: "9e36788b-13fa-45b9-82d5-2cb17400ee91"). InnerVolumeSpecName "kube-api-access-nn7qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 03:01:05.260820 master-0 kubenswrapper[31559]: I0216 03:01:05.260708 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e36788b-13fa-45b9-82d5-2cb17400ee91" (UID: "9e36788b-13fa-45b9-82d5-2cb17400ee91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 03:01:05.301002 master-0 kubenswrapper[31559]: I0216 03:01:05.300922 31559 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 03:01:05.301002 master-0 kubenswrapper[31559]: I0216 03:01:05.300969 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn7qw\" (UniqueName: \"kubernetes.io/projected/9e36788b-13fa-45b9-82d5-2cb17400ee91-kube-api-access-nn7qw\") on node \"master-0\" DevicePath \"\"" Feb 16 03:01:05.301002 master-0 kubenswrapper[31559]: I0216 03:01:05.300986 31559 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 03:01:05.303344 master-0 kubenswrapper[31559]: I0216 03:01:05.303268 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-config-data" (OuterVolumeSpecName: "config-data") pod "9e36788b-13fa-45b9-82d5-2cb17400ee91" (UID: "9e36788b-13fa-45b9-82d5-2cb17400ee91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 03:01:05.360667 master-0 kubenswrapper[31559]: I0216 03:01:05.360598 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520181-96kj6" event={"ID":"9e36788b-13fa-45b9-82d5-2cb17400ee91","Type":"ContainerDied","Data":"79e1fcd9096dc9ab2d664ffe3e860e1ccda2079efb7fa37064da7f4c20fac300"} Feb 16 03:01:05.360667 master-0 kubenswrapper[31559]: I0216 03:01:05.360648 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e1fcd9096dc9ab2d664ffe3e860e1ccda2079efb7fa37064da7f4c20fac300" Feb 16 03:01:05.360928 master-0 kubenswrapper[31559]: I0216 03:01:05.360711 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520181-96kj6" Feb 16 03:01:05.408284 master-0 kubenswrapper[31559]: I0216 03:01:05.408140 31559 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e36788b-13fa-45b9-82d5-2cb17400ee91-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 03:15:00.180955 master-0 kubenswrapper[31559]: I0216 03:15:00.180822 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n"] Feb 16 03:15:00.181958 master-0 kubenswrapper[31559]: E0216 03:15:00.181617 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e36788b-13fa-45b9-82d5-2cb17400ee91" containerName="keystone-cron" Feb 16 03:15:00.181958 master-0 kubenswrapper[31559]: I0216 03:15:00.181645 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e36788b-13fa-45b9-82d5-2cb17400ee91" containerName="keystone-cron" Feb 16 03:15:00.182726 master-0 kubenswrapper[31559]: I0216 03:15:00.182118 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e36788b-13fa-45b9-82d5-2cb17400ee91" containerName="keystone-cron" Feb 16 03:15:00.183357 master-0 kubenswrapper[31559]: I0216 
03:15:00.183301 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" Feb 16 03:15:00.187945 master-0 kubenswrapper[31559]: I0216 03:15:00.187877 31559 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-h8ldk" Feb 16 03:15:00.188107 master-0 kubenswrapper[31559]: I0216 03:15:00.187942 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 03:15:00.232616 master-0 kubenswrapper[31559]: I0216 03:15:00.227682 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n"] Feb 16 03:15:00.252633 master-0 kubenswrapper[31559]: I0216 03:15:00.252422 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-secret-volume\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" Feb 16 03:15:00.252999 master-0 kubenswrapper[31559]: I0216 03:15:00.252920 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-config-volume\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" Feb 16 03:15:00.253469 master-0 kubenswrapper[31559]: I0216 03:15:00.253361 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slqth\" (UniqueName: 
\"kubernetes.io/projected/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-kube-api-access-slqth\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" Feb 16 03:15:00.357104 master-0 kubenswrapper[31559]: I0216 03:15:00.357029 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slqth\" (UniqueName: \"kubernetes.io/projected/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-kube-api-access-slqth\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" Feb 16 03:15:00.357586 master-0 kubenswrapper[31559]: I0216 03:15:00.357553 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-secret-volume\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" Feb 16 03:15:00.358418 master-0 kubenswrapper[31559]: I0216 03:15:00.358376 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-config-volume\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" Feb 16 03:15:00.360022 master-0 kubenswrapper[31559]: I0216 03:15:00.359688 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-config-volume\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" Feb 
16 03:15:00.362200 master-0 kubenswrapper[31559]: I0216 03:15:00.362145 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-secret-volume\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n"
Feb 16 03:15:00.375476 master-0 kubenswrapper[31559]: I0216 03:15:00.375390 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slqth\" (UniqueName: \"kubernetes.io/projected/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-kube-api-access-slqth\") pod \"collect-profiles-29520195-gcz8n\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n"
Feb 16 03:15:00.530839 master-0 kubenswrapper[31559]: I0216 03:15:00.530665 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n"
Feb 16 03:15:01.082905 master-0 kubenswrapper[31559]: I0216 03:15:01.082849 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n"]
Feb 16 03:15:01.084146 master-0 kubenswrapper[31559]: W0216 03:15:01.084114 31559 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0c2b6a8_f4a4_43fe_b13f_73e00d25c698.slice/crio-30f504e64c6a334cc6df78aa10609cbc0da5ea30bb2a10fc6fb6520b7767136c WatchSource:0}: Error finding container 30f504e64c6a334cc6df78aa10609cbc0da5ea30bb2a10fc6fb6520b7767136c: Status 404 returned error can't find the container with id 30f504e64c6a334cc6df78aa10609cbc0da5ea30bb2a10fc6fb6520b7767136c
Feb 16 03:15:01.148553 master-0 kubenswrapper[31559]: I0216 03:15:01.148515 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" event={"ID":"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698","Type":"ContainerStarted","Data":"30f504e64c6a334cc6df78aa10609cbc0da5ea30bb2a10fc6fb6520b7767136c"}
Feb 16 03:15:02.182969 master-0 kubenswrapper[31559]: I0216 03:15:02.182866 31559 generic.go:334] "Generic (PLEG): container finished" podID="e0c2b6a8-f4a4-43fe-b13f-73e00d25c698" containerID="ffaef7b5d521b1e828d1475af8445cc3ab860c3b6ad6962960ae8500da1dfdc9" exitCode=0
Feb 16 03:15:02.182969 master-0 kubenswrapper[31559]: I0216 03:15:02.182929 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" event={"ID":"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698","Type":"ContainerDied","Data":"ffaef7b5d521b1e828d1475af8445cc3ab860c3b6ad6962960ae8500da1dfdc9"}
Feb 16 03:15:03.800083 master-0 kubenswrapper[31559]: I0216 03:15:03.800002 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n"
Feb 16 03:15:03.863806 master-0 kubenswrapper[31559]: I0216 03:15:03.861010 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-secret-volume\") pod \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") "
Feb 16 03:15:03.863806 master-0 kubenswrapper[31559]: I0216 03:15:03.861352 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slqth\" (UniqueName: \"kubernetes.io/projected/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-kube-api-access-slqth\") pod \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") "
Feb 16 03:15:03.863806 master-0 kubenswrapper[31559]: I0216 03:15:03.861559 31559 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-config-volume\") pod \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\" (UID: \"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698\") "
Feb 16 03:15:03.863806 master-0 kubenswrapper[31559]: I0216 03:15:03.862530 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-config-volume" (OuterVolumeSpecName: "config-volume") pod "e0c2b6a8-f4a4-43fe-b13f-73e00d25c698" (UID: "e0c2b6a8-f4a4-43fe-b13f-73e00d25c698"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 03:15:03.863806 master-0 kubenswrapper[31559]: I0216 03:15:03.863043 31559 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 03:15:03.867897 master-0 kubenswrapper[31559]: I0216 03:15:03.867802 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-kube-api-access-slqth" (OuterVolumeSpecName: "kube-api-access-slqth") pod "e0c2b6a8-f4a4-43fe-b13f-73e00d25c698" (UID: "e0c2b6a8-f4a4-43fe-b13f-73e00d25c698"). InnerVolumeSpecName "kube-api-access-slqth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 03:15:03.868001 master-0 kubenswrapper[31559]: I0216 03:15:03.867966 31559 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e0c2b6a8-f4a4-43fe-b13f-73e00d25c698" (UID: "e0c2b6a8-f4a4-43fe-b13f-73e00d25c698"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 03:15:03.964536 master-0 kubenswrapper[31559]: I0216 03:15:03.964361 31559 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 03:15:03.964536 master-0 kubenswrapper[31559]: I0216 03:15:03.964409 31559 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slqth\" (UniqueName: \"kubernetes.io/projected/e0c2b6a8-f4a4-43fe-b13f-73e00d25c698-kube-api-access-slqth\") on node \"master-0\" DevicePath \"\""
Feb 16 03:15:04.216312 master-0 kubenswrapper[31559]: I0216 03:15:04.216143 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n" event={"ID":"e0c2b6a8-f4a4-43fe-b13f-73e00d25c698","Type":"ContainerDied","Data":"30f504e64c6a334cc6df78aa10609cbc0da5ea30bb2a10fc6fb6520b7767136c"}
Feb 16 03:15:04.216312 master-0 kubenswrapper[31559]: I0216 03:15:04.216211 31559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30f504e64c6a334cc6df78aa10609cbc0da5ea30bb2a10fc6fb6520b7767136c"
Feb 16 03:15:04.216312 master-0 kubenswrapper[31559]: I0216 03:15:04.216213 31559 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520195-gcz8n"
Feb 16 03:15:04.933285 master-0 kubenswrapper[31559]: I0216 03:15:04.933213 31559 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"]
Feb 16 03:15:04.944775 master-0 kubenswrapper[31559]: I0216 03:15:04.944706 31559 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520150-tfx2w"]
Feb 16 03:15:05.944671 master-0 kubenswrapper[31559]: I0216 03:15:05.944581 31559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed896b20-c4dc-4b95-be82-5648300ac6c3" path="/var/lib/kubelet/pods/ed896b20-c4dc-4b95-be82-5648300ac6c3/volumes"
Feb 16 03:15:41.114095 master-0 kubenswrapper[31559]: I0216 03:15:41.114008 31559 scope.go:117] "RemoveContainer" containerID="4bf332b4ca8908e5fa9080cb0ecdfdbec32f7856e88a772ecce77d71dd299f77"
Feb 16 03:17:59.833302 master-0 kubenswrapper[31559]: I0216 03:17:59.833230 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9w2xz/must-gather-r669h"]
Feb 16 03:17:59.834059 master-0 kubenswrapper[31559]: E0216 03:17:59.833861 31559 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c2b6a8-f4a4-43fe-b13f-73e00d25c698" containerName="collect-profiles"
Feb 16 03:17:59.834059 master-0 kubenswrapper[31559]: I0216 03:17:59.833883 31559 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c2b6a8-f4a4-43fe-b13f-73e00d25c698" containerName="collect-profiles"
Feb 16 03:17:59.834323 master-0 kubenswrapper[31559]: I0216 03:17:59.834276 31559 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c2b6a8-f4a4-43fe-b13f-73e00d25c698" containerName="collect-profiles"
Feb 16 03:17:59.835892 master-0 kubenswrapper[31559]: I0216 03:17:59.835860 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9w2xz/must-gather-r669h"
Feb 16 03:17:59.841212 master-0 kubenswrapper[31559]: I0216 03:17:59.840931 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9w2xz"/"openshift-service-ca.crt"
Feb 16 03:17:59.841212 master-0 kubenswrapper[31559]: I0216 03:17:59.841076 31559 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9w2xz"/"kube-root-ca.crt"
Feb 16 03:17:59.854347 master-0 kubenswrapper[31559]: I0216 03:17:59.852468 31559 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9w2xz/must-gather-g6xzn"]
Feb 16 03:17:59.858152 master-0 kubenswrapper[31559]: I0216 03:17:59.858087 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9w2xz/must-gather-g6xzn"
Feb 16 03:17:59.869700 master-0 kubenswrapper[31559]: I0216 03:17:59.864829 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9w2xz/must-gather-g6xzn"]
Feb 16 03:17:59.917358 master-0 kubenswrapper[31559]: I0216 03:17:59.917285 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9w2xz/must-gather-r669h"]
Feb 16 03:17:59.997523 master-0 kubenswrapper[31559]: I0216 03:17:59.997397 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kkgf\" (UniqueName: \"kubernetes.io/projected/44add300-343d-4432-86ea-cf0907603a6f-kube-api-access-7kkgf\") pod \"must-gather-g6xzn\" (UID: \"44add300-343d-4432-86ea-cf0907603a6f\") " pod="openshift-must-gather-9w2xz/must-gather-g6xzn"
Feb 16 03:17:59.997854 master-0 kubenswrapper[31559]: I0216 03:17:59.997792 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44add300-343d-4432-86ea-cf0907603a6f-must-gather-output\") pod \"must-gather-g6xzn\" (UID: \"44add300-343d-4432-86ea-cf0907603a6f\") " pod="openshift-must-gather-9w2xz/must-gather-g6xzn"
Feb 16 03:17:59.998022 master-0 kubenswrapper[31559]: I0216 03:17:59.997992 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdpb\" (UniqueName: \"kubernetes.io/projected/fba2f414-741a-4157-9fb7-dbddc07f920d-kube-api-access-4hdpb\") pod \"must-gather-r669h\" (UID: \"fba2f414-741a-4157-9fb7-dbddc07f920d\") " pod="openshift-must-gather-9w2xz/must-gather-r669h"
Feb 16 03:17:59.998307 master-0 kubenswrapper[31559]: I0216 03:17:59.998280 31559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fba2f414-741a-4157-9fb7-dbddc07f920d-must-gather-output\") pod \"must-gather-r669h\" (UID: \"fba2f414-741a-4157-9fb7-dbddc07f920d\") " pod="openshift-must-gather-9w2xz/must-gather-r669h"
Feb 16 03:18:00.100610 master-0 kubenswrapper[31559]: I0216 03:18:00.100463 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kkgf\" (UniqueName: \"kubernetes.io/projected/44add300-343d-4432-86ea-cf0907603a6f-kube-api-access-7kkgf\") pod \"must-gather-g6xzn\" (UID: \"44add300-343d-4432-86ea-cf0907603a6f\") " pod="openshift-must-gather-9w2xz/must-gather-g6xzn"
Feb 16 03:18:00.100610 master-0 kubenswrapper[31559]: I0216 03:18:00.100575 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44add300-343d-4432-86ea-cf0907603a6f-must-gather-output\") pod \"must-gather-g6xzn\" (UID: \"44add300-343d-4432-86ea-cf0907603a6f\") " pod="openshift-must-gather-9w2xz/must-gather-g6xzn"
Feb 16 03:18:00.100850 master-0 kubenswrapper[31559]: I0216 03:18:00.100634 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdpb\" (UniqueName: \"kubernetes.io/projected/fba2f414-741a-4157-9fb7-dbddc07f920d-kube-api-access-4hdpb\") pod \"must-gather-r669h\" (UID: \"fba2f414-741a-4157-9fb7-dbddc07f920d\") " pod="openshift-must-gather-9w2xz/must-gather-r669h"
Feb 16 03:18:00.100850 master-0 kubenswrapper[31559]: I0216 03:18:00.100654 31559 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fba2f414-741a-4157-9fb7-dbddc07f920d-must-gather-output\") pod \"must-gather-r669h\" (UID: \"fba2f414-741a-4157-9fb7-dbddc07f920d\") " pod="openshift-must-gather-9w2xz/must-gather-r669h"
Feb 16 03:18:00.101252 master-0 kubenswrapper[31559]: I0216 03:18:00.101225 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fba2f414-741a-4157-9fb7-dbddc07f920d-must-gather-output\") pod \"must-gather-r669h\" (UID: \"fba2f414-741a-4157-9fb7-dbddc07f920d\") " pod="openshift-must-gather-9w2xz/must-gather-r669h"
Feb 16 03:18:00.101320 master-0 kubenswrapper[31559]: I0216 03:18:00.101258 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44add300-343d-4432-86ea-cf0907603a6f-must-gather-output\") pod \"must-gather-g6xzn\" (UID: \"44add300-343d-4432-86ea-cf0907603a6f\") " pod="openshift-must-gather-9w2xz/must-gather-g6xzn"
Feb 16 03:18:00.123065 master-0 kubenswrapper[31559]: I0216 03:18:00.123025 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdpb\" (UniqueName: \"kubernetes.io/projected/fba2f414-741a-4157-9fb7-dbddc07f920d-kube-api-access-4hdpb\") pod \"must-gather-r669h\" (UID: \"fba2f414-741a-4157-9fb7-dbddc07f920d\") " pod="openshift-must-gather-9w2xz/must-gather-r669h"
Feb 16 03:18:00.127523 master-0 kubenswrapper[31559]: I0216 03:18:00.127481 31559 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kkgf\" (UniqueName: \"kubernetes.io/projected/44add300-343d-4432-86ea-cf0907603a6f-kube-api-access-7kkgf\") pod \"must-gather-g6xzn\" (UID: \"44add300-343d-4432-86ea-cf0907603a6f\") " pod="openshift-must-gather-9w2xz/must-gather-g6xzn"
Feb 16 03:18:00.160022 master-0 kubenswrapper[31559]: I0216 03:18:00.159941 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9w2xz/must-gather-r669h"
Feb 16 03:18:00.178877 master-0 kubenswrapper[31559]: I0216 03:18:00.178797 31559 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9w2xz/must-gather-g6xzn"
Feb 16 03:18:00.699075 master-0 kubenswrapper[31559]: I0216 03:18:00.699028 31559 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 03:18:00.728585 master-0 kubenswrapper[31559]: I0216 03:18:00.725608 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9w2xz/must-gather-g6xzn"]
Feb 16 03:18:00.749717 master-0 kubenswrapper[31559]: I0216 03:18:00.749666 31559 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9w2xz/must-gather-r669h"]
Feb 16 03:18:00.905682 master-0 kubenswrapper[31559]: I0216 03:18:00.905561 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9w2xz/must-gather-r669h" event={"ID":"fba2f414-741a-4157-9fb7-dbddc07f920d","Type":"ContainerStarted","Data":"7e7eeacc9499524b6536558f80d99881356de931b76b2dc1e1ffb22793b4320f"}
Feb 16 03:18:00.907612 master-0 kubenswrapper[31559]: I0216 03:18:00.907536 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9w2xz/must-gather-g6xzn" event={"ID":"44add300-343d-4432-86ea-cf0907603a6f","Type":"ContainerStarted","Data":"ff6a87baec06a0b137a1a24b77250f36d30272a230e3ef25c5efc933ef6d60a1"}
Feb 16 03:18:02.942392 master-0 kubenswrapper[31559]: I0216 03:18:02.942308 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9w2xz/must-gather-r669h" event={"ID":"fba2f414-741a-4157-9fb7-dbddc07f920d","Type":"ContainerStarted","Data":"66d2c78006cac6c19bf000296339c5b1f59f5f86a7ed358adb2199460f405c2d"}
Feb 16 03:18:02.942392 master-0 kubenswrapper[31559]: I0216 03:18:02.942385 31559 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9w2xz/must-gather-r669h" event={"ID":"fba2f414-741a-4157-9fb7-dbddc07f920d","Type":"ContainerStarted","Data":"5603d8c246b0bca8eff153a5e18fd9864f38fcfa215ebc72f126fe8513a4c73b"}
Feb 16 03:18:03.009544 master-0 kubenswrapper[31559]: I0216 03:18:03.003599 31559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9w2xz/must-gather-r669h" podStartSLOduration=2.455021606 podStartE2EDuration="4.003579671s" podCreationTimestamp="2026-02-16 03:17:59 +0000 UTC" firstStartedPulling="2026-02-16 03:18:00.708513853 +0000 UTC m=+3333.053119858" lastFinishedPulling="2026-02-16 03:18:02.257071908 +0000 UTC m=+3334.601677923" observedRunningTime="2026-02-16 03:18:02.997117976 +0000 UTC m=+3335.341723991" watchObservedRunningTime="2026-02-16 03:18:03.003579671 +0000 UTC m=+3335.348185686"
Feb 16 03:18:04.560749 master-0 kubenswrapper[31559]: I0216 03:18:04.560652 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-649c4f5445-lzvc4_ad700b17-ba2a-41d4-8bec-538a009a613b/cluster-version-operator/0.log"
Feb 16 03:18:04.583647 master-0 kubenswrapper[31559]: I0216 03:18:04.582864 31559 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-649c4f5445-lzvc4_ad700b17-ba2a-41d4-8bec-538a009a613b/cluster-version-operator/1.log"